Safe and Secure Generative Artificial Intelligence
Project ID: 2228cd1299 (You will need this ID for your application)
Research Theme: Information and Communication Technologies
UCL Lead department: Electronic and Electrical Engineering (EEE)
Lead Supervisor: Miguel Rodrigues
Project Summary:
With the recent rise of generative artificial intelligence, including state-of-the-art large diffusion models such as DALLE2 or Stable Diffusion, it is now possible to create highly realistic multimedia content at unprecedented levels of scale and fidelity.
It is widely acknowledged that generative artificial intelligence technology is poised to transform various industries, delivering economic and societal benefits. However, it is also acknowledged that generative artificial intelligence poses various challenges: one growing concern is that malicious actors (with access to only modest computational resources) may easily hijack this technology to generate unauthorized or fraudulent content for distribution over the internet and social media, with the intent to propagate false information, manipulate society, blackmail individuals, infringe copyright, or steal personal information.
These concerns, which have been widely voiced and covered by various actors in academia, industry, and the media, are also at the forefront of the debate surrounding regulation of generative artificial intelligence technology (e.g. see the US Congressional Hearing on AI Oversight).
Nonetheless, our ability to counteract potential misuse of generative artificial intelligence technology remains immature, since existing techniques, ranging from watermarking to deepfake detection and prevention, are not robust against modern generative artificial intelligence frameworks. Therefore, there is a pressing need to develop entirely new technical assets enabling the safe and secure deployment of generative artificial intelligence technology.
This PhD research project explores new approaches to immunize image content against malicious manipulation by generative artificial intelligence. The project covers: 1) information-theoretic guarantees for multimedia content protection against generative AI manipulation; 2) the design of mechanisms for multimedia content protection against generative AI manipulation; and 3) the demonstration of this functionality in representative applications. A simple illustrative sketch of one such immunization idea is given below.
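To give a flavour of the kind of approach the project may investigate, the sketch below illustrates one simple immunization idea: adding a small, imperceptible perturbation to an image so that a generative model's latent encoder no longer represents it faithfully, degrading downstream AI-driven edits. The encoder, loss, and parameter values here are illustrative assumptions only, not the project's actual method; a real study would target the encoder of an actual generative model.

    # Illustrative sketch (Python / PyTorch): projected-gradient-style image
    # immunization against a hypothetical differentiable encoder.
    import torch
    import torch.nn as nn

    def immunize(image, encoder, eps=8/255, step=1/255, n_steps=40):
        # Keep the perturbation within an L-infinity ball of radius eps so it
        # stays visually imperceptible, while pushing the encoder's output
        # away from the original latent representation.
        target = encoder(image).detach()                   # original latent
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(n_steps):
            # Negative distance: minimizing this loss maximizes the latent shift.
            loss = -torch.nn.functional.mse_loss(encoder(image + delta), target)
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()          # ascend latent distance
                delta.clamp_(-eps, eps)                    # enforce imperceptibility
                delta.grad.zero_()
        return (image + delta).clamp(0, 1).detach()

    if __name__ == "__main__":
        toy_encoder = nn.Conv2d(3, 4, kernel_size=3, padding=1)  # stand-in encoder
        x = torch.rand(1, 3, 64, 64)                              # toy image batch
        x_immunized = immunize(x, toy_encoder)
        print((x_immunized - x).abs().max())                      # perturbation size

In practice, the research would also need to establish when such perturbations provably survive post-processing and model updates, which is where the information-theoretic strand of the project comes in.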
The successful applicants will have the opportunity to work on cutting-edge research at the intersection of artificial intelligence and multimedia signal/image processing, with potential for real-world deployment.