Detecting Fallacies in Argumentative Text with Computational Argumentation
Project ID: 2531bd1670
(You will need this ID for your application)
Research Theme: Artificial Intelligence and Robotics
Research Area(s):
Artificial intelligence technologies
Human communication in information and communication technologies
Natural language processing
UCL Lead department: Information Studies
Lead Supervisor: Antonis Bikakis
Project Summary:
This PhD project is for students interested in how people reason, argue, and sometimes manipulate information. Applicants should hold (or be close to completing) an MSc in Computer Science, Data Science, or a closely related discipline. The ideal candidate will have a background and experience in one or more of the following areas: formal argumentation, knowledge representation, argument mining, or machine learning for NLP. Applicants should be comfortable with both conceptual and technical work, including formal modelling and computational experimentation.
The PhD will investigate how computational argumentation and natural language processing (NLP) can be used to detect fallacies, i.e. reasoning errors that make arguments invalid or misleading. Fallacies are common in everyday discourse and can be used, intentionally or not, to distort debate and influence opinion. Automatically identifying them is a key challenge for AI. Fallacies are typically divided into formal and informal types. Formal fallacies arise from flaws in logical structure and can often be handled through existing formal logics. Informal fallacies, by contrast, such as false analogies, weak generalisations, claims supported by weak or irrelevant premises, and false dichotomies, depend on context and on subtle errors in reasoning, making them much harder to capture computationally.
This project will build on advances in argument mining (extracting arguments from text) and formal argumentation (structured, logical methods of constructing and evaluating arguments). Argumentative structures extracted from natural language text will be mapped into formal argumentation frameworks capable of representing contextual factors such as trust, the topic of an argument or dialogue, and the values and beliefs of the audience. Within these frameworks, the project will explore how different types of informal fallacies can be represented and detected algorithmically.
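At its simplest, the formal-argumentation side of such a pipeline can be illustrated with a Dung-style abstract argumentation framework: a set of arguments and an attack relation, evaluated under a semantics such as the grounded extension. The following is a minimal sketch only, not part of the project description; the function name and the toy example are illustrative.

```python
from typing import Dict, Set, Tuple

def grounded_extension(args: Set[str], attacks: Set[Tuple[str, str]]) -> Set[str]:
    """Compute the grounded extension of an abstract argumentation
    framework (Dung 1995): the least fixed point of the characteristic
    function F(S) = {a | every attacker of a is attacked by some b in S}."""
    attackers: Dict[str, Set[str]] = {a: set() for a in args}
    for (x, y) in attacks:
        attackers[y].add(x)

    extension: Set[str] = set()
    while True:
        # Arguments attacked by the current extension.
        attacked_by_ext = {y for (x, y) in attacks if x in extension}
        # An argument is acceptable if all its attackers are counter-attacked.
        acceptable = {a for a in args if attackers[a] <= attacked_by_ext}
        if acceptable == extension:  # fixed point reached
            return extension
        extension = acceptable

# Toy example: c attacks b, and b attacks a.
# c is unattacked, so it defends a; the grounded extension accepts a and c.
print(grounded_extension({"a", "b", "c"}, {("c", "b"), ("b", "a")}))
```

Mapping mined arguments onto such a framework, and then enriching the framework with context (trust, topics, audience values), is one plausible way to make the detection step algorithmic.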
Possible research directions include:
(a) formally characterising different types of informal fallacies within argumentation frameworks;
(b) developing formal principles and criteria for rational, fallacy-free argumentation;
(c) designing algorithms for detecting fallacious patterns in argumentative structures; and
(d) evaluating the accuracy and robustness of the proposed methods on real-world argumentative corpora.
This project offers an opportunity to advance explainable AI and to contribute to combating misinformation and strengthening public reasoning.
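To give a flavour of direction (c), some informal fallacies surface as structural patterns in an argument graph. Circular reasoning (begging the question), for instance, appears as a cycle in the claim-to-supporting-premise relation. The sketch below is purely illustrative and assumes a simple dictionary representation of support links; it is not the project's proposed method.

```python
from typing import Dict, List, Set, FrozenSet

def find_circular_support(supports: Dict[str, List[str]]) -> Set[FrozenSet[str]]:
    """Flag one structural fallacy pattern: circular reasoning, detected
    as cycles in the claim -> supporting-premise graph. Returns the set
    of claims involved in each cycle found."""
    cycles: Set[FrozenSet[str]] = set()

    def dfs(node: str, path: List[str]) -> None:
        if node in path:
            # The tail of the path from the repeated node is a cycle.
            cycles.add(frozenset(path[path.index(node):]))
            return
        for premise in supports.get(node, []):
            dfs(premise, path + [node])

    for claim in supports:
        dfs(claim, [])
    return cycles

# Toy example: A is supported by B, and B by A (begging the question),
# while C legitimately cites A.
print(find_circular_support({"A": ["B"], "B": ["A"], "C": ["A"]}))
```

Context-dependent fallacies such as false analogies or weak generalisations would need much richer representations than a bare support graph, which is precisely where the project's combination of argument mining and formal argumentation comes in.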