### Learning to convince? Large language models for human persuasion
Project ID: 2228bd1019 (You will need this ID for your application)
Research Theme: Artificial Intelligence and Robotics
UCL Lead department: Computer Science
Lead Supervisor: Lewis Griffin
Project Summary:
Human decision-making is based not only on evidence and argument but also on less logical persuasion that uses rhetorical techniques such as framing, sequencing and repetition. A better understanding of persuasion has both malign applications (e.g. disinformation and population manipulation) and benign ones (e.g. countering malign uses, such as de-radicalization, and encouraging desirable behaviour, such as healthy eating).
Traditional approaches to the formal modelling of argumentation, rooted in logic and rule-based AI, are unsuitable for modelling and analysing rhetoric. Experimental psychological investigation is possible, but it is extremely difficult, slow and expensive; it can only deal with small, simple scenarios; it yields small datasets; and it can be ethically problematic.
We hypothesize (H) that Large Language Models (LLMs) provide a model of human response to arguments with sufficient fidelity to allow rhetorical persuasion to be investigated. If true, H simultaneously raises the potential threat of this technology being used against the state and offers an opportunity to exploit it for benign aims.
In this PhD project the student will advance the use of LLMs both as a model of human response to argument and as an automated means of producing effective arguments, using criminal trials as a test case. Criminal trials are an excellent test case because written transcripts exist and are obtainable, the trials are self-contained, and their language and subject matter are complex.