
Computational threat assessments: The relationship between online threats and real-world action

Project ID: 2531bd1704

(You will need this ID for your application)

Research Theme: Digital Security and Resilience

Research Area(s): Digital Security and Resilience

UCL Lead department: Security and Crime Science


Lead Supervisor: Paul Gill

Project Summary:

Governments and law enforcement agencies rely on effective threat assessment tools to address terrorism, mass shootings, and other forms of violence. Increasingly, these tools are asked to assess the likelihood of threats originating in digital spaces translating into real-world violence. However, the science has not kept pace with the rate of change evident in practitioner caseloads. This project has the potential to help authorities pre-emptively identify, triage, prevent, and disrupt risk through the testing and validation of various predictive and computational models. This could advance the state of the art in AI and natural language processing (NLP), especially in sentiment analysis, anomaly detection, and contextual understanding. A necessary aspect of this thesis is exploring how computational threat assessment tools can be developed and used ethically, avoiding misuse and discrimination. The chosen student will review studies on the psychological and crime science underpinnings of online threats. They will investigate existing computational tools and algorithms for sentiment analysis, NLP, and threat detection, including the use of psycholinguistic dictionaries (e.g., the Grievance Dictionary).
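
To illustrate the dictionary-based approach mentioned above, a minimal Python sketch follows; it scores a post against a generic psycholinguistic word list. The file name, column layout, and category labels are placeholders for illustration only, not the actual Grievance Dictionary distribution format.

```python
from collections import Counter
import csv
import re

def load_dictionary(path):
    """Load a word -> category mapping from a CSV file.
    Assumes two columns named 'word' and 'category' (a placeholder
    format, not the actual Grievance Dictionary release)."""
    lexicon = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            lexicon[row["word"].lower()] = row["category"]
    return lexicon

def score_post(text, lexicon):
    """Return the proportion of tokens falling in each dictionary category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(lexicon[t] for t in tokens if t in lexicon)
    total = max(len(tokens), 1)
    return {category: n / total for category, n in counts.items()}

if __name__ == "__main__":
    lexicon = load_dictionary("grievance_dictionary.csv")  # placeholder path
    print(score_post("They will pay for what they did to me", lexicon))
```

Category proportions of this kind are one simple way to turn raw posts into features that downstream threat assessment models can use.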

Projects could involve temporal examinations of rich case studies where online threats have escalated into real-world incidents. The project will involve the collection and cleaning of data from multiple stakeholder partners, and the pre-processing of textual data for computational analysis. Empirical analyses could include any mixture of the following:

(1) Using machine learning techniques to develop predictive models for identifying credible threats (a minimal baseline sketch follows this list)

(2) Applying NLP techniques (e.g., sentiment analysis, topic modelling) to assess the content of online posts

(3) Integrating behavioural patterns, historical data, and context into the model for better accuracy

(4) Testing the model’s ability to correlate online activity with real-world actions
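
As an illustration of how items (1) and (2) might begin, the sketch below pairs TF-IDF text features with a logistic regression classifier. It is a minimal baseline under stated assumptions, not the project's actual pipeline: the posts and labels are invented toy data standing in for the cleaned stakeholder datasets described above, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Placeholder data: in practice, posts and labels would come from the
# cleaned stakeholder datasets described in the project summary.
posts = [
    "I am going to make them regret ignoring me",
    "Great game last night, see everyone at the rematch",
    "They deserve everything that is coming to them",
    "Looking forward to the conference next week",
]
labels = [1, 0, 1, 0]  # 1 = flagged as a credible threat, 0 = benign (toy labels)

# Bag-of-words baseline: TF-IDF features feeding a logistic regression classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.5, random_state=0, stratify=labels
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

A fuller analysis would layer on sentiment and topic-model features (item 2), behavioural and contextual covariates (item 3), and validation against real-world outcomes using temporally ordered splits (item 4).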