
Robustness to Outliers in Bayesian Optimisation

Project ID: 2228cd1425 (You will need this ID for your application)

Under Offer

Research Theme: Mathematical Sciences

UCL Lead department: Statistical Science

Department Website

Lead Supervisor: Jeremias Knoblauch

Project Summary:

Robustness is essential in safety-critical applications such as healthcare, finance, and precision engineering. In these settings, it is vital that outliers, heterogeneities, and minor variations in inputs do not cause major variations in outputs. Building algorithms that satisfy these conditions is challenging, particularly for large-scale data sets that are often collected and analysed in uncontrolled environments and in real time.

Here, we will develop methodology for robust Bayesian optimisation. Bayesian optimisation is a powerful and efficient tool for optimising objective functions that are expensive to evaluate. It uses probabilistic models (typically Gaussian Processes) to capture the uncertainty associated with this objective. By iteratively choosing the next point at which to evaluate the objective, Bayesian optimisation balances exploration and exploitation to find a point as close as possible to the optimal solution. Relative to traditional optimisation methods, Bayesian optimisation typically requires far fewer function evaluations to get close to the objective's optimum.
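As a rough illustration of the loop described above, a minimal sketch in Python might look as follows. The toy objective, the Matérn kernel, the expected-improvement acquisition, and all parameter choices are illustrative assumptions, not part of the project itself:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):
        # Toy "expensive" objective; in practice this is the costly black box.
        return np.sin(3.0 * x) + 0.1 * x ** 2

    def expected_improvement(x_cand, gp, y_best):
        # Balances exploitation (low predicted mean) against exploration
        # (high predictive uncertainty) for a minimisation problem.
        mu, sigma = gp.predict(x_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=(3, 1))     # small initial design
    y = objective(X).ravel()

    for _ in range(20):
        # Fit the Gaussian Process surrogate to all evaluations so far.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        # Pick the next evaluation point by maximising expected improvement.
        x_grid = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
        x_next = x_grid[np.argmax(expected_improvement(x_grid, gp, y.min()))]
        X = np.vstack([X, x_next.reshape(1, 1)])
        y = np.append(y, objective(x_next).ravel())

    print("best input:", X[np.argmin(y)].item(), "best value:", y.min())

In a loop of this kind, even a single badly corrupted observation can distort the Gaussian Process fit and, with it, the acquisition function; this sensitivity is exactly the failure mode the project targets.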

To do this, we will build on a recent line of work that explores so-called generalised Bayesian methods for robustness to outliers. The key insight underlying these methods is that standard Bayesian optimisation makes a crucial assumption: that the posited model could, in principle, recover the true objective function. When this assumption fails, however, the basis for applying Bayes' Rule no longer holds. In this all-too-common scenario, Bayesian optimisation and its associated uncertainty quantification become unreliable. By replacing the standard Bayesian update with a robust, loss-based alternative, this issue can be overcome.
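To make the idea concrete, the generalised Bayes literature typically writes the loss-based update in a form like the following (the notation here is illustrative rather than taken from the project description):

    \pi_\ell(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,\exp\!\Big(-\sum_{i=1}^{n} \ell(\theta, x_i)\Big)

Choosing \ell(\theta, x_i) = -\log p(x_i \mid \theta) recovers the standard Bayesian posterior, whereas bounded or density-power losses (for example, losses derived from the \beta-divergence) downweight observations that the model cannot explain, which is what yields robustness to outliers.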

In this PhD project, we will investigate which types of generalised Bayes updates lead to desirable robust Bayesian optimisation algorithms, through rigorous mathematical analysis and numerical experiments. A strong candidate will have a solid mathematical/statistical background, good programming experience (preferably in Python), and a demonstrable interest in machine learning.