AI as Emotional Points of Contact? – An Interview With Journalist in Residence Christoph Koch

Since the RHET AI Center was founded, the Journalist-in-Residence Program (JIR) has been an integral part of our annual program. Together with Cyber Valley, we enable journalists to spend an extended period of time in Tübingen conducting research on a topic at the intersection of artificial intelligence and journalism.

During their time in Tübingen and at Cyber Valley, journalists have the opportunity to exchange ideas with researchers in the field of artificial intelligence, establish networks, and benefit from the working environment and expertise at Cyber Valley and RHET AI. In recent years, we have welcomed Prof. Christina Elmer, Julia Merlot (SPIEGEL), Bettina Friedrich (MDR), Tobias Asmuth (freelance), Elena Riedlinger (WDR), Willem de Haan (MDR), and Dr. Anna Henschel (wissenschaftskommunikation.de) as JIRs. During their stay, our JIRs have dealt with a wide range of topics at the interface between AI and journalism. They have researched the impact of AI on climate change, the possibility of using AI tools for fact-checking or distributing journalistic content, and the question of which metaphors we use in society to talk about artificial intelligence.

In 2025, we welcomed Christoph Koch (freelance) as our journalist-in-residence. From October to December, he investigated the extent to which AI systems can act as emotional interlocutors – between alleviating loneliness, creating new dependencies, and raising ethical questions. He was interested in the risks and opportunities inherent in emotionally charged interactions with AI.

Christoph has been working as a freelance journalist for over 20 years and writes for brand eins, SZ-Magazin, and Die ZEIT, among others. His work has been recognized with several journalism awards. In his articles and nonfiction books (including "Ich bin dann mal offline" and "Digitale Balance"), he frequently explores the intersections of technology and society.

Photo: Urban Zintel

We met Christoph Koch twice during his time as journalist-in-residence and talked to him about his research, his motivation, and the questions that arose before, during, and after his work on this topic.

Hello Christoph, thank you for taking the time for our interview and giving us an insight into your stay here. Your residency has just begun, so take us along on your journey so far: What motivated you to apply to be a journalist-in-residence?

I heard that a colleague of mine was a JIR guest here a few years ago. So I gave him a quick call to ask him how he liked it and whether he considered the stay to be enriching. Let's put it this way: I started my application on the same day.

During your time here, you are researching and working on the question of the extent to which AI systems can act as emotional interlocutors and what opportunities and risks this potentially entails. How did you come up with this question?

I was fascinated by how the use of AI systems is changing. Recently, I have noticed more and more in my own environment, but also in the media, that people are moving from a purely factual and productive use of generative AI to, in some cases, an emotional and intimate use. Some people now ask chatbots for advice on personal matters, others use them as psychotherapeutic aids or create virtual friends and partners via AI companions. I found this fascinating and wanted to learn more about it.

What appeals to you about this question and the potential answers?

I always find it interesting when a technology develops from a niche for a few people into a larger phenomenon. I happened to be in San Francisco when the first iPhone was launched in June 2007 and bought one out of pure curiosity. I remember how, for the first two years, I was constantly asked what I actually used it for. Eighteen years and billions of app downloads later, we can no longer imagine life without smartphones. Back then, it was difficult to foresee the social dynamics and subsequent developments that smartphones would trigger—and the situation is similar today with AI. I don't know whether emotional AI use will change our lives and society to the same extent. But it is definitely no longer a marginal phenomenon that only affects a few curious individual cases.

Where do you see the relevance of this topic?

Emotions are one of the last areas that we humans still assume distinguish us from "the machines" and thus also from AI systems. When it comes to calculating, flying airplanes, and increasingly also reading and writing, we now let machines take the lead. But what happens when more and more people decide to use machines to help them listen, empathize, cope with grief, or satisfy their desire for emotional closeness and/or sex?

What thought processes, research, etc. have taken place so far (before you started as a JIR and since you've been here), and how has this changed your perspective on your question?

Before starting as a JIR, I began by reading extensively on the subject. Since arriving in Tübingen, I have had many productive and informative discussions, from which I have already learned a great deal. Just one example: there is much more to emotion recognition from speech or video signals than I had previously thought. When it comes to topics like this, talking to experts is often more up-to-date and insightful than reading a book that was published several years ago.

Following up on that, in your inaugural lecture you formulated ten questions to start your residency. How did you come up with this list of questions?

I tried to break down the broad topic into specific questions. This was also to show the different disciplines – from psychology to ethics and linguistics to basic AI research – that are linked to it.

Do you have any predictions about where further questions might arise, or are there aspects of this topic that you hadn't considered before but which are now emerging as relevant to your research question?

Here, too, is just one example of many: I hadn't really considered the aspect of "afterlife AI" before. This refers to people who are essentially resurrected through generative AI because the AI system has been fed material (be it text, sound, photos, or video). These can be private individuals who sit in front of a camera during their lifetime to create suitable material for an afterlife AI. But it could also be dead politicians who are reanimated using previous campaign speeches to appeal to the emotions of their former supporters and gather votes. I find it an interesting question where the ethical boundaries lie here.

What are you hoping to gain from your time here?

As a journalist—especially as a freelancer—I am usually under quite a lot of pressure to be efficient and productive. Research should not take longer than necessary, at least not too often. I consider it a great privilege to be able to spend three months intensively working on a topic and exchanging ideas with experts.

For the second part of the interview, we met with Christoph Koch again towards the end of his JIR residency in late December. Against this backdrop, we were particularly interested in the findings of his research and his overall assessment of his residency in Tübingen.

Dear Christoph, it's great to see you again. Your time as a journalist-in-residence is coming to an end. How do you look back on it, and can you already draw some (preliminary) conclusions?

It was a very inspiring time for me, both at Cyber Valley and RHET AI, as well as in Tübingen in general. I was able to make valuable contacts and have many rewarding conversations here. That was also my hope when I started the program. What made me very happy was realizing that during the three months here, I really had time for my own research and was able to read more books on the subject than I usually do in a whole year. It was also really nice to be part of this network here. People contacted me who were interested in my journalistic perspective for their projects, and I was very happy to be able to contribute my input. I hadn't expected that at the beginning, and it was a nice surprise for me.

To sum up, I would say that I definitely noticed that the topic of emotional AI use is something that concerns a great number of people. It is a highly relevant social issue. I encountered a lot of interest, and the importance of the topic did not wane over the three months, but proved to be relevant time and again.

Before we delve deeper into the content, did you find answers to the ten questions you started with when you began your time here?

For some of them I did, yes, for others I didn't. On the one hand, generative AI and its emotional use is still a very young phenomenon, and on the other hand, the technology is evolving on a weekly basis. But many new questions have arisen.

Is there one answer that stands out for you and was particularly surprising, perhaps because you didn't expect it?

Yes. Although this is only tangentially related to my research topic, what caught me off guard was hearing from experts that explainable AI—in other words, knowing exactly why an AI system arrives at a particular result—is no longer possible. The systems have become so complex, and so many parameters are integrated, that we simply can't know for sure anymore. Conversely, this doesn't mean that we shouldn't continue to strive for a certain level of transparency in the systems. But this ideal image—that we understand exactly what is happening in the system and why exactly a certain output is produced, i.e., that we can reconstruct the system's exact calculation methods—is now an illusion.

This is, of course, also a major challenge for my research topic, as in this sensitive field it is necessary to ensure that AI systems do not give bad advice or hallucinate when responding to a person who is, for example, currently in a mental health crisis.

Let us ask you this question directly: To what extent can AI systems serve as emotional partners? What does the future look like in this regard?

My research has shown me that many people already use AI systems as emotional partners. Much more than we probably often assume. Unfortunately, there are still very few concrete figures on this, or rather, they vary greatly. It is also a sensitive topic that many people may not be so keen to talk about.

In fact, initial studies have already been conducted, for example on the topic of loneliness. This is a very socially relevant topic; we are now even talking about a "loneliness epidemic." Studies have shown that communicating with a chatbot can be very effective when people feel lonely: the feeling of loneliness is alleviated to about the same extent as when talking to another person, and significantly more than when watching a YouTube video, for example. However, the effect is not long-lasting. In the short term, communicating with an AI chatbot can be very effective, but it does not change the underlying situation. In romantic interactions with chatbots, however, loneliness plays a surprisingly small role. Here, a study showed that it is rather people with strong romantic imaginations, or the ability to engage in romantic fantasies, who seek closeness with a chatbot.

What are the opportunities and risks of using AI systems as emotional partners?

One opportunity, of course, is being able to offer help around the clock and in a flexible manner. On the risk side, unpredictability is a major issue. Furthermore, there is the question of what this inherent over-empathy, which arises when AI systems are trained to always confirm the user's views first, does to people who are in a mental health crisis and may be the last ones who need to hear their worldview confirmed. This is especially true of generative AI systems. For this reason, rule-based systems have primarily been used as emotional interlocutors up to now. However, these are less flexible, sometimes require a prescription, and are therefore often not what the general public uses, which tends to fall back on generative AI systems such as ChatGPT, Gemini, or similar.

AI systems are supposed to compensate for the shortage of therapy places or the effects of the loneliness epidemic, for example. How do you assess the possibilities here, and what consequences do you see for society and the community if we outsource such systemic problems to AI systems?

I have a concrete example that illustrates this point quite well. During my research, I visited a retirement home in Albershausen that is testing a social bot. The residents can interact with it; it tells jokes, engages in conversation, and so on, and it is well received by most of the residents. However, it does not make the staff's work any easier, because they have to carry it around and sometimes maintain it. So I asked myself: wouldn't it be much more helpful to develop a system with suitable hardware that makes the beds, giving employees more time to interact with the residents? However, that is more complex to construct than putting a generative AI system in a plastic case, which is why there is no bed-making robot yet.

The bigger question that this example raises is: Are we delegating and outsourcing the wrong tasks to AI systems? They may be helpful as a stopgap measure, but the goal should, of course, be to close gaps in care and truly solve systemic problems.

What are the ethical implications and consequences of entrusting for-profit tech companies with the task of developing and hosting emotional companions? Do you see any solutions here?

The problem exists. As was previously the case in the tech sector, the technology is concentrated in very few hands, almost all of which are located in the US, with a few exceptions perhaps in China. These are corporations with high profit expectations. Due to the large amount of investment that has gone into AI development, there is enormous commercial pressure on companies, and this is also evident in their products. The aim is to maximize retention time, which is why generative AI systems keep asking follow-up questions when you use them. The products are designed to become firmly embedded in everyday life, create habits, and thus be "sticky".

We have already seen how problematic this can be with the development of social media, and AI systems are currently moving in a similar direction. I find this ethically very questionable and worrying.

The question of solutions is a difficult one. Actually, the approach OpenAI originally started with made sense—namely, as a non-profit alternative to the then-dominant AI players such as Google. Unfortunately, however, this has completely shifted. A company that is truly oriented toward the common good, with no expectations of profit, that only pursues the interests of its users and is secure, would of course be ideal for these sensitive areas. But whether this is a complete utopia, or at least partially achievable, is the question.

What guidelines, legal frameworks, and forms of governance are needed when dealing with and using AI for these sensitive topics?

They are definitely needed. At the moment, most AI companion apps are not regulated and, to my knowledge, are not covered by the AI Act. In the current debate on an age limit for social media, there is also discussion of whether such a limit should be extended to AI companions. But of course, it remains questionable how effective this would be. I think it is very important to protect these sensitive areas. Not everyone is allowed to offer psychotherapy; this is rightly regulated, and it should be no different for AI systems. I see a great need for action here.

You have also spoken with users, among others. What insights have you gained from these conversations? What do people who use AI systems as emotional companions report?

What people have often told me is that they appreciate the non-judgmental space and see AI systems as contact persons to whom they can safely and openly say what they want to say. In addition, the systems are attributed a surprisingly high degree of objectivity. During my time here, we held a workshop with users, and it was very valuable for me to be able to listen and ask how and why people open up to AI systems as emotional points of contact. What is becoming very clear is that AI companion apps are still used relatively little compared to the common generative language models.

How do you perceive the discourse on this topic? There is currently a growing amount of criticism of AI in general. Where do you see your role as a journalist, especially after your time here?

In the discourse, I think we are currently in a phase of great disenchantment with this technology, and negative views that mix many different issues tend to dominate. These views cover a wide range of topics, such as data protection, jobs, environmental issues, and the economic question of whether and when the AI bubble will burst. Of course, this also carries over to the emotional use of AI tools, which is sometimes heavily demonized. A critical view is certainly justified, but I think we should be wary of becoming too arrogant, especially towards users.

I see it as the task of journalism to rein in the discourse whenever it becomes excessive – in either direction – and to continually reevaluate points of view. We must articulate criticism clearly while remaining open to opportunities.

Have new questions arisen for you?

One question that preoccupies me greatly is whether the emotional use of AI systems makes us more empathetic or causes us to lose empathy. There is no scientific consensus on this yet, and both possibilities are conceivable. Perhaps in the end, it will remain something that varies from person to person and system to system.

What are your next steps with this research?

It's a topic that interests me greatly, but it also resonates widely with the public and seems to interest many other people as well. Accordingly, I will continue to work on it, and it will not be completed after my stay here. On January 29, I will give my final presentation, and I will also remain in contact with several experts and continue our discussions over the next few weeks.

Thank you very much, Christoph, for giving us such in-depth insights into your work here with us. We are excited to see how you will continue with the insights and experiences you have gained in Tübingen.

Anyone who would like to hear firsthand about Christoph Koch's time as a journalist-in-residence and is interested in talking to him is cordially invited to attend his final presentation on January 29, 2026. It will take place at 5:00 p.m. at the Max Planck Institute for Intelligent Systems in Tübingen, Max-Planck-Ring 4. The presentation will be held in German. You can register for the final presentation here.