The Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI) in cooperation with the Max Planck Institute for Intelligent Systems Tübingen/Stuttgart (MPI-IS) is organizing an interdisciplinary symposium on the foundations, functionalities, and communicative implications of Generative AI. The symposium will take place on November 12–14, 2024, at the MPI-IS in Tübingen, and the scientific discussion will be accompanied by a public Science Notes event as part of the Science and Innovation Days of the University of Tübingen.
In Tübingen, we want to discuss and navigate expected changes in communication through the advent of generative AI. We aim to address interdisciplinary questions and problems about the influence of generative AI, its technical and communicative parameters, and possible societal developments. The event will emphasize the role of generative AI as a bridge between theory and practice in AI research and the humanities.
To achieve this, we aim to bring together researchers from a diverse set of disciplines such as machine learning, cognitive science, philosophy of language, rhetoric, linguistics, and media studies with experts from journalism, culture and science communication.
Research and society seem to agree that Generative AI is about to profoundly influence our modes of communication, yet its precise impact remains ambiguous. Is Tristan Harris and Aza Raskin’s (Center for Humane Technology) assertion true that what nuclear weapons were to the physical world, generative AI is to the virtual and symbolic world? Or should we rather aim to rethink the inevitability of AI? Does generative AI genuinely challenge traditional modes, channels, and media of communication, or is it simply a contrivance to bolster the profiles and profits of tech conglomerates? Between the poles of this spectrum, there remains a wide range of interpretations demanding nuanced analysis.
Grasping the full scope of generative AI’s impact is thus critical. As generative AI tools become capable of producing or co-creating content, the very notion of ‘content’ itself is evolving – leading to debates over authorship, parrotism, and copyright. The implications of a rhetorically acting AI that convincingly mimics human output challenge us deeply, sparking discussions about ‘artificial influence’. Given such a persuasive AI’s potential to disrupt power dynamics, especially in political arenas, its role in the 2024 global elections – particularly the US election in November – will serve as a critical litmus test.
Rhetorical analysis has long aimed to understand and ethically navigate the biases and motives of communication. Generative AI introduces new problems to this challenge, yet the core issue persists: discerning intent, interests and motivations. As AI-generated content becomes more sophisticated, our rhetorical verification methods must also evolve, particularly to address possible malicious human-AI collaborations.
We invite scholars and practitioners to participate in the symposium with submissions for one of two formats: individual papers and problem pitches for small group discussions.
Key topics
Other topics regarding the intersections of machine learning and communication, rhetoric, and generative AI are welcome.
Paper submissions
Abstracts for individual papers (30 minutes including discussion) should contain the title and a summary of the paper of no more than 250 words, along with the speaker’s name and full contact address (including email address).
Problem pitch submissions
At the symposium we want to create space to discuss your problems related to the implications of generative AI for research and communication. Such problems can be conceptualized in a variety of ways (inter-/transdisciplinary, theoretical, practical…) and should be posed with the aim of deliberation, consultation, and matchmaking for possible solutions. You will be able to present your problem in small groups through short pitches (1–2 min), ideally accompanied by material for the discussion participants. Problems should be submitted in no more than 100 words, with the submitter's name and full contact information (including email address).
Deadline for abstracts and problems: June 14, 2024
Notification of acceptance: Early July 2024
If you have any questions, please contact us by e‑mail at persuasivealgorithms@rhet.ai.
Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI)
Speaker
Prof. Dr. Olaf Kramer
olaf.kramer@uni-tuebingen.de
Project Coordination
Dr. Markus Gottschling
markus.gottschling@uni-tuebingen.de
University of Tübingen
Rhetoric Department
Wilhelmstr. 50
72074 Tübingen