Persuasive Algorithms?

A Symposium on the Rhetoric of Generative AI

International Conference

12–14 November 2024

Tübingen

The Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI), in cooperation with the Max Planck Institute for Intelligent Systems Tübingen/Stuttgart (MPI-IS), is hosting an interdisciplinary symposium on the foundations, workings, and communicative implications of generative AI. The symposium takes place from 12 to 14 November 2024 at the MPI-IS in Tübingen. The scholarly discussion is accompanied by a Science Notes event, open to all, held as part of the Science and Innovation Days of the University of Tübingen.

In Tübingen we want to discuss whether and how communication is changing under the influence of generative AI. The focus is on interdisciplinary questions and problems concerning the influence of generative AI, its technical and communicative framework conditions, and possible societal tendencies. The event also aims to make the role of generative AI tangible as a bridge between theory and practice in AI research and the humanities. To this end, we want to bring together researchers from disciplines such as machine learning, cognitive science, philosophy of language, rhetoric, linguistics, and media studies with experts from journalism, culture, and science communication.

The conference language is English.

ABOUT

Research and society seem to agree that Generative AI is about to profoundly influence our modes of communication, yet its precise impact remains ambiguous. Is Tristan Harris and Aza Raskin's (Center for Humane Technology) assertion true that what nuclear weapons were to the physical world, generative AI is to the virtual and symbolic world? Or should we rather rethink the supposed inevitability of AI? Does generative AI genuinely challenge traditional modes, channels, and media of communication, or is it simply a contrivance to bolster the profiles and profits of tech conglomerates? Between the poles of this spectrum, there remains a wide range of interpretations demanding nuanced analysis.

Grasping the full scope of generative AI's impact is thus critical. As generative AI tools become capable of producing or co-creating content, the very notion of 'content' is evolving – leading to debates over authorship, parroting, and copyright. The implications of a rhetorically acting AI that convincingly mimics human output challenge us deeply, sparking discussions about 'artificial influence'. Given such a persuasive AI's potential to disrupt power dynamics, especially in political arenas, its role in the 2024 global elections – particularly the US election in November – will serve as a critical litmus test.

Rhetorical analysis has long aimed to understand and ethically navigate the biases and motives of communication. Generative AI introduces new problems to this challenge, yet the core issue persists: discerning intent, interests, and motivations. As AI-generated content becomes more sophisticated, our rhetorical verification methods must also evolve, particularly to address possible malicious human-AI collaborations.

SYMPOSIUM GOALS AND KEY TOPICS

The Creation of Realities
What do generative AI's creations reveal about the nature of content and knowledge? We invite exploration into whether AI-generated images, texts, videos, and music are authentic artifacts or contingent on human interaction for their significance.
Problems of persuasion
Does generative AI threaten the status of human authorship? To what extent does generative AI use rhetorical structures? Discussions should probe the rhetorical strategies embedded in AI's imitations, the foundations of language models, and AI's capacity for contextual understanding and persuasion.
AI's worldview and ethics
How does generative AI shape our worldview and ethical reasoning (and vice versa)? We encourage investigation into the trustworthiness of AI-generated knowledge, its philosophical implications, and the potential of AI to shape future narratives.
Developing a promptology
What could a study of promptology teach us about AI and human interaction? Proposals could address the relationship between AI outputs and human communication, collaborative possibilities, and strategies for integrating human and AI contributions.
Generative AI's influence on science communication
What implications does generative AI have for the communication of research in the sciences and the humanities? We seek insight into the challenges posed by AI's 'black box' nature and its effect on communication theory, methodologies, and practices.
Generative AI and public discourse
How is generative AI reshaping public communication and its study? Submissions should consider how generative AI redefines the socio-technocultural framework and the interdisciplinary roles in critiquing and shaping policy.

PARTICIPATION

We invite scholars and practitioners to participate in the symposium with submissions in one of two formats: individual papers and problem pitches for small-group discussions.

Key topics

  • The Creation of Realities
  • Problems of persuasion
  • AI's worldview and ethics
  • Developing a promptology
  • Generative AI's influence on science communication
  • Generative AI and public discourse

Other topics regarding the intersections of Machine Learning and Communication, Rhetoric and Generative AI are welcome.

Paper submissions
Abstracts for individual papers (30 minutes including discussion) should contain the title and a summary of the paper of no more than 250 words, along with the name of the speaker and a full contact address (including email address).

Problem pitch submissions
At the symposium we want to create space to discuss your problems related to the implications of generative AI for research and communication. Such problems can be conceptualized in a variety of ways (inter-/transdisciplinary, theoretical, practical…) and should be posed with the aim of deliberation, consultation, and matchmaking for possible solutions. You will be able to present your problem in small groups through short pitches (1–2 minutes), ideally accompanied by material for the discussion participants. Problems should be submitted in no more than 100 words, with the submitter's name and full contact information (including email address).

Deadline for abstracts and problems: June 14, 2024

Notification of acceptance: Early July 2024

If you have any questions, please contact us by e-mail at persuasivealgorithms@rhet.ai.

Registration is closed.

Organization and Contact

Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI)

Speaker

Prof. Dr. Olaf Kramer
olaf.kramer@uni-tuebingen.de

Project Coordination

Dr. Markus Gottschling
markus.gottschling@uni-tuebingen.de

Eberhard Karls Universität Tübingen

Seminar für Allgemeine Rhetorik
Wilhelmstr. 50
72074 Tübingen