Since the RHET AI Center was founded, the Journalist-in-Residence Program (JIR) has been an integral part of our annual program. Together with Cyber Valley, we enable journalists to spend an extended period of time in Tübingen conducting research on a topic at the intersection of artificial intelligence and journalism.
During their time in Tübingen and at Cyber Valley, journalists have the opportunity to exchange ideas with researchers in the field of artificial intelligence, establish networks, and benefit from the working environment and expertise at Cyber Valley and RHET AI. In recent years, we have welcomed Prof. Christina Elmer, Julia Merlot (SPIEGEL), Bettina Friedrich (MDR), Tobias Asmuth (freelance), Elena Riedlinger (WDR), Willem de Haan (MDR), and Dr. Anna Henschel (wissenschaftskommunikation.de) as JIRs. During their stays, our JIRs have explored a wide range of topics at the interface between AI and journalism. They have researched the impact of AI on climate change, the potential of AI tools for fact-checking or distributing journalistic content, and the question of which metaphors we as a society use to talk about artificial intelligence.
In 2025, we welcomed Christoph Koch (freelance) as our journalist-in-residence. From October to December, he investigated the extent to which AI systems can act as emotional interlocutors – from alleviating loneliness to creating new dependencies and raising ethical questions. He was interested in the risks and opportunities inherent in emotionally charged interactions with AI.
Christoph has been working as a freelance journalist for over 20 years and writes for brand eins, SZ-Magazin, and Die ZEIT, among others. His work has been recognized with several journalism awards. In his articles and nonfiction books (including "Ich bin dann mal offline" and "Digitale Balance"), he frequently explores the intersections of technology and society.

We met Christoph Koch twice during his time as journalist-in-residence and talked to him about his research, his motivation, and the questions that arose before, during, and after his work on this topic.
Hello Christoph, thank you for taking the time for our interview and giving us an insight into your stay here. Your residency has just begun, so take us along on your journey so far: What motivated you to apply to be a journalist-in-residence?
I heard that a colleague of mine was a JIR guest here a few years ago. So I gave him a quick call to ask him how he liked it and whether he considered the stay to be enriching. Let's put it this way: I started my application on the same day.
During your time here, you are researching and working on the question of the extent to which AI systems can act as emotional interlocutors and what opportunities and risks this potentially entails. How did you come up with this question?
I was fascinated by how the use of AI systems is changing. Recently, I have noticed more and more in my own environment, but also in the media, that people are moving from a purely factual and productive use of generative AI to an emotional and intimate use in some cases. Some people now ask chatbots for advice on personal matters, others use them as psychotherapeutic aids or create virtual friends and partners via AI companions. I found this fascinating and wanted to learn more about it.
What appeals to you about this question and the potential answers?
I always find it interesting when a technology develops from a niche for a few people into a larger phenomenon. I happened to be in San Francisco when the first iPhone was launched in June 2007 and bought one out of pure curiosity. I remember how, for the first two years, I was constantly asked what I actually used it for. Eighteen years and billions of app downloads later, we can no longer imagine life without smartphones. Back then, it was difficult to foresee the social dynamics and subsequent developments that smartphones would trigger—and the situation is similar today with AI. I don't know whether emotional AI use will change our lives and society to the same extent. But it is definitely no longer a marginal phenomenon that only affects a few curious individual cases.
Where do you see the relevance of this topic?
Emotions are one of the last areas that we humans still assume distinguish us from "the machines" and thus also from AI systems. When it comes to calculating, flying airplanes, and increasingly also reading and writing, we now let machines take the lead. But what happens when more and more people decide to use machines to help them listen, empathize, cope with grief, or satisfy their desire for emotional closeness and/or sex?
What thought processes, research, etc. have taken place so far (before you started as a JIR and since you've been here), and how has this changed your perspective on your question?
Before starting as a JIR, I began by reading extensively on the subject. Since arriving in Tübingen, I have had many productive and informative discussions, from which I have already learned a great deal. Just one example: there is much more to emotion recognition from speech or video signals than I had previously thought. When it comes to topics like this, talking to experts is often more up-to-date and insightful than reading a book that was published several years ago.
Following up on that, in your inaugural lecture you formulated ten questions to start your residency. How did you come up with this list of questions?
I tried to break down the broad topic into specific questions. This was also to show the different disciplines – from psychology to ethics and linguistics to basic AI research – that are linked to it.
1: Can AI systems actually recognize and interpret human emotions reliably?
2: Can AI systems simulate emotions and empathy?
3: Can AI bots help lonely people?
4: Can AI systems provide therapeutic help in mental health crises?
5: Can machines be TOO empathetic?
6: How great is the risk of emotional manipulation by AI?
7: What business interests are linked to emotional AI interaction?
8: How healthy are romantic relationships with artificial intelligence?
9: How can we ensure good and safe emotional interactions with AI systems?
10: What can I contribute as a journalist?
Do you have any predictions about where further questions might arise, or are there aspects of this topic that you hadn't considered before but which are now emerging as relevant to your research question?
Here, too, is just one example of many: I hadn't really considered the aspect of "afterlife AI" before. This refers to people who are essentially resurrected through generative AI because the AI system has been fed material about them (be it text, sound, photos, or video). These can be private individuals who sit in front of a camera during their lifetime to create suitable material for an afterlife AI. But it could also be dead politicians who are reanimated using earlier campaign speeches to appeal to the emotions of their former supporters and win votes. I find it an interesting question where the ethical boundaries of this lie.
What are you hoping to gain from your time here?
As a journalist—especially as a freelancer—I am usually under quite a lot of pressure to be efficient and productive. Research should not take longer than necessary, at least not too often. I consider it a great privilege to be able to spend three months intensively working on a topic and exchanging ideas with experts.
For the second part of the interview, we met with Christoph Koch again towards the end of his JIR residency in late December. This time, we were particularly interested in the findings of his research and his overall assessment of his residency in Tübingen.
Dear Christoph, it's great to see you again. Your time as a journalist-in-residence is coming to an end. How do you look back on it, and can you already draw some (preliminary) conclusions?
It was a very inspiring time for me, both at Cyber Valley and RHET AI, as well as in Tübingen in general. I was able to make valuable contacts and have enriching conversations here. That was also my hope when I started the program. What made me very happy was realizing that during the three months here, I really had time for my own research and was able to read more books on the subject than I usually do in a whole year. It was also really nice to be part of this network here. People contacted me who were interested in my journalistic perspective for their projects, and I was very happy to contribute my input. I hadn't expected that at the beginning, and it was a nice surprise for me.
To sum up, I would say that I definitely noticed that the topic of emotional AI use is something that concerns a great number of people. It is a highly relevant social issue. I encountered a lot of interest, and the importance of the topic did not wane over the three months, but proved to be relevant time and again.
Before we delve deeper into the content, did you find answers to the ten questions you started with when you began your time here?
For some of them I did, yes, for others I didn't. On the one hand, generative AI and its emotional use is still a very young phenomenon, and on the other hand, the technology is evolving on a weekly basis. But many new questions have arisen.
Is there one answer that stands out for you and was particularly surprising, perhaps because you didn't expect it?
Yes. Although this is only tangentially related to my research topic, what caught me off guard was hearing from experts that explainable AI—in other words, knowing exactly why an AI system arrives at a particular result—is no longer possible. The systems have become so complex and there are so many parameters integrated that we simply can't know for sure anymore. Conversely, this doesn't mean that we shouldn't continue to strive for a certain level of transparency in the systems. But this ideal image—that we understand exactly what is happening in the system and why exactly a certain output is produced, i.e., that we can reconstruct the system's exact calculation methods—is now an illusion.
This is, of course, also a major challenge for my research topic, as in this sensitive field it is necessary to ensure that AI systems do not give bad advice or hallucinate when responding to a person who is, for example, currently in a mental health crisis.
Let us ask you the question directly: To what extent can AI systems serve as emotional partners? And what does the future look like in this regard?
My research has shown me that many people already use AI systems as emotional partners. Much more than we probably often assume. Unfortunately, there are still very few concrete figures on this, or rather, they vary greatly. It is also a sensitive topic that many people may not be so keen to talk about.
In fact, initial studies have already been conducted, for example on the topic of loneliness. This is a very socially relevant topic; we are now even talking about a "loneliness epidemic." Studies have shown that communicating with a chatbot when feeling lonely is quite effective: the feeling of loneliness is alleviated to about the same extent as when talking to another person, and significantly more than when watching a YouTube video, for example. However, the effect is not long-lasting. In the short term, communicating with an AI chatbot can be very effective, but it does not change the underlying situation. In romantic interactions with chatbots, by contrast, loneliness plays a surprisingly small role. Here, a study showed that it is rather people with strong romantic imaginations, or the ability to engage in romantic fantasies, who seek closeness with a chatbot.
What are the opportunities and risks of using AI systems as emotional partners?
One opportunity, of course, is being able to offer help around the clock and in a flexible manner. On the risk side, unpredictability is a major issue. There is also the question of what this built-in over-empathy, which arises when AI systems are trained to affirm the user's views first, does to people who are in a mental health crisis and may be precisely the ones who should not simply have their worldview confirmed. This applies especially to generative AI systems. For this reason, rule-based systems have primarily been used as emotional interlocutors up to now. However, these are less flexible, sometimes require a prescription, and are therefore often not what the general public uses, which tends to fall back on generative AI systems such as ChatGPT, Gemini, or similar.
AI systems are supposed to compensate for the shortage of therapy places or the effects of the loneliness epidemic, for example. How do you assess the possibilities here, and what consequences do you see for society and the community if we outsource such systemic problems to AI systems?
I have a concrete example that illustrates this point quite well. During my research in Albershausen, I visited a retirement home that is testing a social bot. The residents can interact with it; it tells jokes, engages in conversation, etc., and is well received by most of the residents. However, it does not make the staff's work any easier because they have to carry it around and sometimes maintain it. So I asked myself, wouldn't it be much more helpful to develop a system with suitable hardware that makes the beds, giving employees more time to interact with the residents? However, this is more complex to construct than putting a generative AI system in a plastic case, which is why there is no bed-making robot yet.
The bigger question that this example raises is: Are we delegating and outsourcing the wrong tasks to AI systems? They may be helpful as a stopgap measure, but the goal should, of course, be to close gaps in care and truly solve systemic problems.
What are the ethical implications and consequences of entrusting for-profit tech companies with the task of developing and hosting emotional companions? Do you see any solutions here?
The problem exists. As was previously the case in the tech sector, the technology is concentrated in very few hands, almost all of them in the US, with a few exceptions perhaps in China. These are corporations with high profit expectations. Because of the enormous investment that has gone into AI development, companies are under great commercial pressure, and this is evident in their products. The aim is to maximize the time users spend with the product, which is why generative AI systems constantly come back with follow-up questions. The products are designed to become firmly embedded in everyday life, create habits, and thus be "sticky".
We have already seen how problematic this can be with the development of social media, and AI systems are currently moving in a similar direction. I find this ethically very questionable and worrying.
The question of solutions is a difficult one. Actually, the approach OpenAI originally started with made sense—namely, as a non-profit alternative to the then-dominant AI players such as Google. Unfortunately, however, this has completely shifted. A company that is truly oriented toward the common good, with no expectations of profit, that only pursues the interests of its users and is secure, would of course be ideal for these sensitive areas. But whether this is a complete utopia, or at least partially achievable, is the question.
What guidelines, legal frameworks, and forms of governance are needed when dealing with and using AI for these sensitive topics?
They are definitely needed. At the moment, most AI companion apps are not regulated and, to my knowledge, are not covered by the AI Act. In the current debate on an age limit for social media, it is also being discussed whether such a limit should be extended to AI companions. But of course, it remains questionable how effective this would be. I think it is very important to protect these sensitive areas. Not everyone is allowed to offer psychotherapy; this is rightly regulated, and it should be no different for AI systems. I see a great need for action here.
You have also spoken with users, among others. What insights have you gained from these conversations? What do people who use AI systems as emotional companions report?
What people have often told me is that they appreciate the non-judgmental space and see AI systems as conversation partners to whom they can safely and openly say what they want to say. In addition, the systems are credited with a surprisingly high degree of objectivity. During my time here, we held a workshop with users, and it was very valuable for me to be able to listen and ask how and why people open up to AI systems as emotional confidants. What is becoming very clear is that AI companion apps are still used relatively little compared to the common generative language models.
How do you perceive the discourse on this topic? There is currently a growing amount of criticism of AI in general. Where do you see your role as a journalist, especially after your time here?
In the discourse, I think we are currently in a phase of great disenchantment with this technology, and negative views that mix many different issues tend to dominate. These views cover a wide range of topics, such as data protection, jobs, environmental issues, and the economic question of whether and when the AI bubble will burst. Of course, this also carries over to the emotional use of AI tools, which is sometimes highly demonized. A critical view is certainly justified, but I think we should be wary of becoming too arrogant, especially towards users.
I see it as the task of journalism to rein in the discourse whenever it becomes excessive – in either direction – and to continually reevaluate points of view. We must articulate criticism clearly while remaining open to opportunities.
Have new questions arisen for you?
One question that preoccupies me greatly is whether the emotional use of AI systems makes us more empathetic or causes us to lose empathy. There is no scientific consensus on this yet, and both possibilities are conceivable. Perhaps in the end, it will remain something that varies from person to person and system to system.
What are your next steps with this research?
It's a topic that interests me greatly, and it clearly resonates with the public and many other people as well. Accordingly, I will continue to work on it; it will not end with my stay here. On January 29, I will give my final presentation, and I will also remain in contact with several experts and continue our discussions over the coming weeks.
Thank you very much, Christoph, for giving us such in-depth insights into your work here with us. We are excited to see how you will continue with the insights and experiences you have gained in Tübingen.
Anyone who would like to hear firsthand about Christoph Koch's time as a journalist-in-residence and is interested in talking to him is cordially invited to attend his final presentation on January 29, 2026. It will take place at 5:00 p.m. at the Max Planck Institute for Intelligent Systems in Tübingen, Max-Planck-Ring 4. The presentation will be held in German. You can register for the final presentation here.