Featured image: Olaf Kramer on stage at the lectern during his keynote "Persuasive Machines" at the AlphaPersuade summit.

Persuasive Machines: A Review of AlphaPersuade @UC Irvine, USA

For three days, everything revolved around rhetoric and AI research at the AlphaPersuade summit in Irvine, California. From February 27 to 29, participants engaged in intensive and enthusiastic discussions, exchanged ideas and developed new findings.

During the first two days, an interdisciplinary group of rhetoricians, educators, linguists and neuroscientists debated how rhetoric can be integrated into the development and implementation of (generative) AI. The overarching topic was a possible framework for ethical persuasion that can be applied across AI technologies. The RHET AI Center was represented at the event by Dr. Markus Gottschling and Prof. Dr. Olaf Kramer.

In his opening keynote "Persuasive Machines" at the symposium on February 29, Olaf Kramer illustrated how rhetorical terminology can be used effectively in communication with an interdisciplinary audience. He showed how scholarly expertise from rhetoric on persuasion in AI technology can be applied and communicated in industry and in AI development. His keynote focused on the major challenges posed by generative AI from a rhetorical perspective: questions of authorship and responsibility, social implications such as the avoidance of fake news and information overload, and questions of transparency and trust in generative AI.

Olaf Kramer at the lectern at the symposium, with his presentation on the screen behind him.
Olaf Kramer delivers his opening keynote at the AlphaPersuade symposium on February 29, 2024.

The two other keynotes also offered valuable and exciting insights and impulses: Tiera Tanksley (UCLA) explored how research on bias and hallucinations can offer solutions to some of the most persistent problems in the use of AI. Casey Mock (Center for Humane Technology) looked at the flip side of persuasion: deception. He asked how it can be countered at the development stage and how AI technology can be designed to minimize its ability to deceive.

The AlphaPersuade summit provided an excellent platform for intensive exchange and discussion, and at the end of the three days it left participants and organizers alike optimistic and motivated. The exchange will certainly continue; we will keep you informed.