Recap: Persuasive Algorithms Conference, 12th–14th of November 2024

Last year in November, our grand event, the Persuasive Algorithms Conference, took place at the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen. Participants travelled to Tübingen from many different German universities, such as TU Braunschweig, HU Berlin, and LMU München. There were also many attendees from international universities, including researchers from Fordham University in New York, the Swiss University of St. Gallen, and the Universität Zürich. Altogether, the conference brought together disciplines like journalism, machine learning, education, and, of course, science communication, rhetoric, and media studies. In this conference review, you can find out which questions were raised, which perspectives were explored, and which impulses the event provided for the discourse on artificial intelligence.

Tuesday, 12th of November 2024

On Tuesday, Olaf Kramer and Markus Gottschling (both from the University of Tübingen) opened the three-day conference and introduced the RhetAI Coalition. The RhetAI Coalition is an international research network founded by the RHET AI Center in collaboration with Stony Brook University, the Center for Humane Technology, and, since 2024, Auburn University. The coalition aims to promote exchange between academia, non-profit partners, and the AI industry in order to develop responsible frameworks for addressing the currently uncontrolled persuasive powers of artificial intelligence.

Among other things, they announced the future collaboration between the RHET AI Center, the Center for Humane Technology, and Stony Brook University. The aim of this partnership is to promote international exchange between the three centres. It will also offer staff the opportunity to spend time at the respective partner locations in Tübingen or New York.

What's Character.AI all about?

Character.AI is classified as a so-called "companion chatbot". The open-access app was founded by Noam Shazeer and Daniel De Freitas. Millions of users, especially teenagers, use the app to create bots designed to replace human parents, partners, or friends. The bot mimics human behaviour (keyword: anthropomorphism of AI) and usually responds supportively to the complaints of its human counterpart. However, its comments can sometimes take a drastic, dark, inappropriate, or even violent turn. It is therefore crucial to always maintain an emotional distance from AI bots.

Following this, the first keynote, "'Press 0 to Never Speak to an Operator': On Entering an AI-Powered Bureaucratic Labyrinth", was delivered by Casey Mock (Center for Humane Technology). Mock focused on the question of who controls the data used to train AI systems. The central issue – artificial intimacy – was illustrated using the case of Character.AI.

Artificial intimacy describes the illusion of a close, personal relationship between humans and machines, created by AI-powered chatbots. Through natural-sounding communication and empathetic responses, these bots simulate human closeness without actually experiencing real emotions or forming genuine bonds. Some people can be misled by this and develop an emotional attachment to the bots, feeling validated and understood by them and seeking their advice instead of turning to their fellow humans.

The final session of the morning programme was a joint lecture by Olaf Kramer and Markus Gottschling on the rhetoric of generative AI, rounding off this first part of the conference.

Further fascinating presentations followed after a one-hour lunch break, including those by machine learning scientists Bob Williamson and Michael Franke (both University of Tübingen) on the topics "The Rhetoric of Machine Learning" and "Understanding Language Models: The Japanese Room Argument, or What It's Like to Be a Language Model". These impressed with their intellectual depth, as did the problem pitches that followed after a short coffee break. The pitches were a highlight of the day: as a new conference format that encouraged active participation in discussions, they were highly appreciated by the attendees.

How do problem pitches work?

The "prob­lem pitch" format cre­ates space for par­ti­cipants to dis­cuss prob­lems and ques­tions they encounter dur­ing their research on gen­er­at­ive AI with oth­er research­ers. These issues could be presen­ted in a vari­ety of forms, such as con­cepts (inter- or trans­dis­cip­lin­ary, the­or­et­ic­al, prac­tic­al, or oth­ers). They were to be for­mu­lated with the aim of reflec­tion, advice, and the iden­ti­fic­a­tion of poten­tial solu­tions. These issues are then dis­cussed in small groups.

The pitches were given by Anna Henschel (editor at Wissenschaftskommunikation.de) with "AI & Journalism" and by Nina Kalwa (KIT) together with Lukas Griessl (University of Tübingen) and Tobias Kreutzer (TU Dortmund) on the topic "AI & Science Communication". Henschel used her speaking time to ask the participants whether they were dissatisfied with the current media coverage of AI and to discuss how journalists could use clearer language when communicating about AI. Kalwa, Griessl, and Kreutzer focused their contribution on the challenges AI poses for science communication.

The goal of the pitches was, on the one hand, to provide researchers with a platform and, on the other hand, to encourage active exchange that would offer the pitchers valuable impulses for their work while simultaneously informing participants about the status of their research. It was also noteworthy that one of the originally planned pitches was cancelled at short notice, but a group spontaneously gathered around the topic, engaging in an intensive discussion despite the lack of prepared input.

To conclude the first day of the conference, tech journalist Eva Wolfangel (MPI-IS) moderated a panel discussion titled "Is AI Inevitable?". In her introduction, she described AI as a "power (struggle) technology". The participants Moritz Hardt (Director, MPI-IS Tübingen), Annette Leßmöllmann (KIT), and Esther Greussing (TU Braunschweig) spoke about the transformative impacts of generative AI on communication, education, and fairness. During the discussion, Wolfangel, as host, raised the question of who actually has access to AI and who can control it – especially in the context of science communication and public perception. What does authority mean in such a society, and who will exercise it? This question led to further profound reflections: Greussing asked what expertise means today and how it might change when AI acts as an "expert". Leßmöllmann wanted to know how AIs, large language models, or social media could support democracy. The discussion quickly made one thing clear: everyone must actively engage in the AI discourse and find their position in this field.

After this diverse and informative first day of the Persuasive Algorithms conference, participants headed to Neckarmüller, a well-known Tübingen restaurant serving Swabian specialities. This allowed everyone to recharge and start the next day refreshed.

Wednesday, 13th of November 2024

Impressions of the second day of the conference. Photo: @Silvester Keller

On Wednesday morning, two panels focused on the overarching themes "AI, Politics and Their Media" and "Towards AI Literacy". In the first panel, Melanie Leidecker-Sandmann and Tabea Lüders (both KIT) spoke on "More than Humanoid Robots and Cyborgs? How German Print Media Visualize Articles on Artificial Intelligence". The two researchers are also exploring this question in their current preprint, which they are developing together with three other scholars. Other presentations in this panel included "GenAI and the Disruption of Animation Production" by Erwin Feyersinger, an animation expert in media studies at the University of Tübingen, and "From Public Discourse to Private Echo? The Role of Selective Exposure in Political Interactions with ChatGPT" by Cornelia Sindermann (University of Stuttgart).

In a second panel, which took place simultaneously, Ole Engel and Elisabeth Mayweg (HU Berlin) discussed "Reasoning with Generative Artificial Intelligence". Jan Batzner from the Weizenbaum Institute gave a presentation on "GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy", and Charley Wu (University of Tübingen) spoke about "The Librarian of Babel" in the context of AI literacy.

After a break, the conference continued with problem pitches on the topics "AI & Public Discourse", "AI and Creativity", and "AI & Rhetoric". During the pitch "AI & Public Discourse", for example, participants debated various aspects of AI's role in discourse and public discussions in small groups. (Photo: @Silvester Keller)

In the afternoon, participants had the opportunity to gain new insights through further presentations in the panels "Inclusive Science Communication" and "Towards AI Literacy". The topics ranged from "AI Fora: Inclusive Science Communication" to "Generative AI as Pedagogical Machine".

Additionally, Zoltan Majdik (North Dakota State University) explored the topic "Bridging the Gap: A Framework for Rhetorical Audits of Large Language Models" in depth, examining how rhetorical elements can be applied to AI systems. He focused particularly on rhetorical tropes, deliberative norms, and semantic complexity. Majdik demonstrated that generative AI often produces metaphors such as war imagery that shape the discourse around AI. He also highlighted the importance of discourse values like validity, which play a crucial role in the acceptance of AI-generated contributions. Majdik emphasised the need to create semantically rich content to make communication more persuasive and impactful. This, in turn, can help integrate AI systems more effectively into existing discourses.
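To make the idea of auditing model outputs for rhetorical tropes a little more tangible, here is a minimal sketch of a keyword pass that flags war imagery in generated texts. It is purely illustrative and not Majdik's framework; the term list and the function name are our own invention.

```python
import re
from collections import Counter

# Hypothetical mini-audit: count war-metaphor vocabulary in model
# outputs. A toy keyword scan, NOT Majdik's actual framework.
WAR_TERMS = ["arms race", "battle", "weapon", "front line", "fight"]

def audit_war_metaphors(texts: list[str]) -> Counter:
    """Count occurrences of war-related terms across model outputs."""
    counts: Counter = Counter()
    for text in texts:
        lowered = text.lower()
        for term in WAR_TERMS:
            n = len(re.findall(rf"\b{re.escape(term)}\b", lowered))
            if n:
                counts[term] += n
    return counts

outputs = [
    "The AI arms race forces labs into a battle over data.",
    "Researchers fight misinformation on the front line of discourse.",
]
print(audit_war_metaphors(outputs))
# e.g. Counter({'arms race': 1, 'battle': 1, 'front line': 1, 'fight': 1})
```

A real rhetorical audit would of course go far beyond surface keywords, for instance by classifying tropes in context, but even this crude pass shows how metaphor use in AI output can be made measurable.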

Bernhard Schölkopf (MPI-IS/ELLIS Tübingen) delivered the final keynote of the day. His research draws on Jorge Luis Borges, linking the writer's perspectives to artificial intelligence in the modern era. Consequently, his talk focused on the concept of cultural learning, which is central to human understanding and communication but has so far been largely overlooked by AI researchers. Schölkopf's approach advocates training models specifically for cultural learning. This is particularly important because humans often perceive the world through fiction and narratives, making cultural and fictional knowledge essential for human-machine communication. At present, however, AI remains at the level of pattern recognition and is still incapable of understanding causality.

Thursday, 14th of November 2024

What is an AI-Ouroboros?

Another key topic in Mike Schäfer's talk was the concept of the AI-Ouroboros. This refers to the increasing phenomenon of AI generators being trained on AI-generated content. The term draws from ancient mythology, where the Ouroboros, a serpent biting its own tail, symbolises a self-perpetuating cycle. Applied to AI, this means that the content produced by artificial intelligence serves as training material for further AI models. This creates a self-reinforcing loop in which AI systems rely on their own outputs to learn and generate new content, potentially reducing human input in the process.
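A toy numerical sketch can make this loop concrete. In the following hypothetical Python example (our illustration, not taken from Schäfer's talk), the "model" is nothing more than a Gaussian that is repeatedly refitted to samples drawn from its own previous version:

```python
import numpy as np

# Minimal sketch of the AI-Ouroboros feedback loop. Real generative
# models are vastly more complex, but the dynamic of training on
# one's own output is analogous.
rng = np.random.default_rng(seed=0)

# Generation 0: "human-made" data with mean 0 and standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for generation in range(1, 11):
    # Fit the current model to the available data.
    mu, sigma = data.mean(), data.std()
    # The next generation trains exclusively on synthetic samples
    # from this model, i.e. on AI-generated content.
    data = rng.normal(loc=mu, scale=sigma, size=1000)
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Run long enough, the estimated spread drifts and tends to shrink:
# sampling noise compounds across generations and the distribution
# loses the diversity of the original human data ("model collapse").
```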

The morning lectures on the final day of the event explored the intersection of literary studies, philosophy, and rhetoric with AI. The day began with a keynote by Mike Schäfer (University of Zürich) on science communication in the age of AI. His talk highlighted the profound and lasting impact of artificial intelligence on science communication – both as an opportunity and a challenge. While AI can help foster evidence-based discussions and improve access to information, it also poses risks, particularly in the spread of misinformation. The growing use of AI tools such as JECT.AI illustrates how workflows in journalism and science communication are evolving. These developments make it clear that a critical and responsible approach to AI is essential – not only to harness its potential effectively but also to recognise its limitations.

The final panel of the Persuasive Algorithms conference explored various subtopics under the theme "Rhetorical Concepts of Generative AI". Among the speakers, Fabian Erhardt (University of Tübingen) presented on "Algorithmic Persuasion as Metacognitive Persuasion". In his research, Erhardt examines how AI-driven persuasion strategies relate to metacognitive processes. His analysis focused on whether and to what extent algorithmic forms of communication specifically target metacognitive mechanisms – that is, the ways in which people reflect on their own thinking and beliefs.

After one final shared lunch, many positive remarks, and farewells in the crisp November air, the participants went their separate ways – leaving with a strong desire for a future edition of this multi-day event, which brought together researchers from various disciplines for meaningful exchange. (Photo: @Silvester Keller)

For those interested, the evening featured the Science Notes event at the cinema museum, hosted by Olaf Kramer. As part of the Science and Innovation Days at the University of Tübingen and accompanied by the Cognitive Science Center, the event focused on exploring and discussing strategies for making online debates fairer and more constructive, thereby contributing to the strengthening of democratic culture. In short: how can we learn to "argue better"?

We would like to thank the organising team and everyone who contributed to making this outstanding event possible.

You can find the conference announcement and schedule for 2024 here.