Persuasive Algorithms?

The Rhetorics of Generative AI

International Conference

November 12–14, 2024

Tübingen

The Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI), in cooperation with the Max Planck Institute for Intelligent Systems Tübingen/Stuttgart (MPI-IS), is organizing an interdisciplinary symposium on the foundations, functionalities, and communicative implications of Generative AI. The symposium will take place on November 12–14, 2024, at the MPI-IS in Tübingen, and the scientific discussion will be accompanied by a public Science Notes event as part of the Science and Innovation Days of the University of Tübingen.

In Tübingen, we want to discuss and navigate the changes in communication expected to follow from the advent of generative AI. We aim to address interdisciplinary questions and problems about the influence of generative AI, its technical and communicative parameters, and possible societal developments. The event will emphasize the role of generative AI as a bridge between theory and practice in AI research and the humanities.

To achieve this, we aim to bring together researchers from a diverse set of disciplines such as machine learning, cognitive science, philosophy of language, rhetoric, linguistics, and media studies with experts from journalism, culture, and science communication.

ABOUT

Research and society seem to agree that Generative AI is about to profoundly influence our modes of communication, yet its precise impact remains ambiguous. Is Tristan Harris and Aza Raskin's (Center for Humane Technology) assertion true that what nuclear weapons were to the physical world, generative AI is to the virtual and symbolic world? Or should we rather rethink the supposed inevitability of AI? Does generative AI genuinely challenge traditional modes, channels, and media of communication, or is it simply a contrivance to bolster the profiles and profits of tech conglomerates? Between the poles of this spectrum, there remains a wide range of interpretations demanding nuanced analysis.

Grasping the full scope of generative AI's impact is thus critical. As generative AI tools become capable of producing or co-creating content, the very notion of 'content' is evolving, leading to debates over authorship, parrotism, and copyright. The implications of a rhetorically acting AI that convincingly mimics human output challenge us deeply, sparking discussions about 'artificial influence'. Given such a persuasive AI's potential to disrupt power dynamics, especially in political arenas, its role in the 2024 global elections, particularly the US election in November, will serve as a critical litmus test.

Rhetorical analysis has long aimed to understand and ethically navigate the biases and motives of communication. Generative AI introduces new problems to this challenge, yet the core issue persists: discerning intent, interests, and motivations. As AI-generated content becomes more sophisticated, our rhetorical verification methods must also evolve, particularly to address possible malicious human-AI collaborations.

SYMPOSIUM GOALS AND KEY TOPICS

The Creation of Realities
What do generative AI's creations reveal about the nature of content and knowledge? We invite exploration into whether AI-generated images, texts, videos, and music are authentic artifacts or contingent on human interaction for their significance.
Problems of persuasion
Does generative AI threaten the status of human authorship? To what extent does generative AI use rhetorical structures? Discussions should probe the rhetorical strategies embedded in AI's imitations, the foundations of language models, and AI's capacity for contextual understanding and persuasion.
AI's worldview and ethics
How does generative AI shape our worldview and ethical reasoning (and vice versa)? We encourage investigation into the trustworthiness of AI-generated knowledge, its philosophical implications, and the potential of AI to form future narratives.
Developing a promptology
What could a study of promptology teach us about AI and human interaction? Proposals could address the relationship between AI outputs and human communication, collaborative possibilities, and strategies for integrating human and AI contributions.
Generative AI's influence on science communication
What implications does generative AI have for the communication of research in the sciences and the humanities? We seek insight into the challenges posed by AI's 'black box' nature and its effect on communication theory, methodologies, and practices.
Generative AI and public discourse
How is generative AI reshaping public communication and its study? Submissions should consider how generative AI redefines the socio-technocultural framework and the interdisciplinary roles in critiquing and shaping policy.

PARTICIPATION

We invite scholars and practitioners to participate in the symposium with submissions in one of two formats: individual papers and problem pitches for small-group discussions.

Key topics

  • The Creation of Realities
  • Problems of persuasion
  • AI's worldview and ethics
  • Developing a promptology
  • Generative AI's influence on science communication
  • Generative AI and public discourse

Other topics regarding the intersections of Machine Learning and Communication, Rhetoric, and Generative AI are welcome.

Paper submissions
Abstracts for individual papers (30 minutes including discussion) should contain the title and a summary of the paper of at most 250 words, along with the name of the speaker and full contact details (including email address).

Problem pitch submissions
At the symposium, we want to create space to discuss your problems related to the implications of generative AI for research and communication. Such problems can be conceptualized in a variety of ways (inter-/transdisciplinary, theoretical, practical…) and should be posed with the aim of deliberation, consultation, and matchmaking for possible solutions. You will be able to present your problem in small groups through short pitches (1–2 min), ideally accompanied by material for the discussion participants. Problems should be submitted in no more than 100 words, with the submitter's name and full contact information (including email address).

Deadline for abstracts and problems: June 14, 2024

Notification of acceptance: Early July 2024

If you have any questions, please contact us by e-mail at persuasivealgorithms@rhet.ai.

Registration is closed.

Organization and Contact

Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI)

Speaker

Prof. Dr. Olaf Kramer
olaf.kramer@uni-tuebingen.de

Project Coordination

Dr. Markus Gottschling
markus.gottschling@uni-tuebingen.de

University of Tübingen

Rhetoric Department
Wilhelmstr. 50
72074 Tübingen