Agent Based Modelling of False Belief Dissemination

Project ID: 2531ad1581

(You will need this ID for your application)

Research Theme: Digital Security and Resilience

UCL Lead department: Security and Crime Science

Department Website

Lead Supervisor: Paul Gill

Project Summary:

This thesis will develop agent-based simulations to examine the dynamic process by which people develop false beliefs. It builds on Pilditch et al. (2021), who established a simulation in which agents form beliefs by assessing source credibility and misinformation cues within a network. Here, the intensity of the misinformation cue will be changed from an objective variable attached to the information itself into a subjective variable attached to each agent. This will be achieved by using individual rather than average beliefs among agents, thereby reflecting the confirmation bias observed in both ordinary and misinformed individuals (Frey, 1986; Gagliardi, 2023).

Additionally, a meta-analysis of existing studies on risk factors for false belief acceptance will be conducted; its findings will inform the initial distribution of misinformation cue sensitivity, capturing distinct levels of individual gullibility. Misinforming broadcasters will also be introduced to represent different levels of availability of implausible content. The simulation will reveal what proportion of agents acquire misinformed beliefs under different initial sensitivities to misinformation and different pathways to misinformation in the environment. The results will be validated by fitting the model to cross-country surveys, such as the one conducted elsewhere. In addition, the model will be capable of assessing the effects of possible interventions through modification of parameters, for example, changing the distribution of initial sensitivity to misinformation to capture the effect of inoculation (Lewandowsky & van der Linden, 2021), or the ratio of misinformation broadcasters to capture the effect of content moderation (Morrow et al., 2022).
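To make the mechanism concrete, the following is a minimal illustrative sketch of the kind of model described: each agent carries a subjective misinformation-cue sensitivity drawn from an initial distribution, belief updates are weighted toward the agent's current belief (a simple stand-in for confirmation bias), and a fraction of sources are misinforming broadcasters. All class names, parameters, and update rules here are assumptions for illustration; this is not the Pilditch et al. (2021) model.

```python
import random

random.seed(42)

class Agent:
    """Toy agent with a subjective sensitivity to misinformation cues."""

    def __init__(self, sensitivity):
        self.sensitivity = sensitivity  # per-agent cue sensitivity in [0, 1]
        self.belief = 0.5               # 0 = accurate belief, 1 = misinformed

    def receive(self, is_misinformation):
        # Crude confirmation bias: the update weight grows with the
        # agreement between the incoming signal and the current belief,
        # so agents move more readily toward belief-consistent content.
        signal = 1.0 if is_misinformation else 0.0
        weight = self.sensitivity * (1.0 - abs(self.belief - signal))
        self.belief += weight * (signal - self.belief)

def run_simulation(n_agents=100, broadcaster_ratio=0.1, steps=50):
    # Initial sensitivity distribution; in the actual project this would
    # be informed by the meta-analysis of risk factors (here, an
    # arbitrary Beta(2, 5) is assumed).
    agents = [Agent(random.betavariate(2, 5)) for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:
            # Each step, an agent encounters either a misinforming
            # broadcaster or an ordinary (accurate) source.
            from_broadcaster = random.random() < broadcaster_ratio
            agent.receive(is_misinformation=from_broadcaster)
    # Outcome of interest: the proportion of agents ending up misinformed.
    return sum(a.belief > 0.5 for a in agents) / n_agents

print(run_simulation())
```

Interventions map directly onto parameters in this sketch: inoculation would shift the sensitivity distribution toward lower values, while content moderation would lower `broadcaster_ratio`.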