How do artificial agents localise sounds in the world?

Project ID: 2531ad1538

(You will need this ID for your application)

Research Theme: Information and Communication Technologies

UCL Lead department: Ear Institute

Lead Supervisor: Jennifer Bizley

Project Summary:

Our ability to pinpoint where a sound comes from in space is critical not only for survival, but also for listening to speech in noisy environments. Most studies of sound localisation place static listeners within a ring of fixed speakers. Yet in the real world, sounds move through space, and listeners move too, generating dynamic localisation cues as they do so. To advance our understanding of how the brain supports listening in such scenarios, we have designed an environment in which animals navigate based on sounds while we record the acoustic signals available at each ear, the animal's head and eye position, and neural activity.
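For readers unfamiliar with these cues: the two classical binaural cues are the interaural time difference (ITD) and the interaural level difference (ILD), and both can be estimated from a short frame of two-ear recordings. The sketch below is purely illustrative, not the lab's analysis pipeline; the function name and the cross-correlation approach are assumptions for the example.

```python
import numpy as np

def binaural_cues(left: np.ndarray, right: np.ndarray, fs: float):
    """Estimate ITD and ILD from one frame of two-ear recordings.

    ITD: lag of the peak of the cross-correlation between the ears.
    ILD: difference in RMS level between the ears, in dB.
    (Illustrative sketch; name and method are assumptions.)
    """
    # Full cross-correlation; lags run from -(len(right)-1) to len(left)-1.
    xcorr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    # Seconds; under numpy's convention a left-leading source gives a negative lag.
    itd = lags[np.argmax(xcorr)] / fs

    eps = 1e-12  # avoid log of zero on silent frames
    rms = lambda x: np.sqrt(np.mean(x ** 2)) + eps
    ild = 20.0 * np.log10(rms(left) / rms(right))  # dB; positive => left louder
    return itd, ild

# Toy check: a tone delayed and attenuated at the right ear.
fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)
left = np.pad(tone, (0, 10))         # arrives first at the left ear
right = 0.7 * np.pad(tone, (10, 0))  # ~0.2 ms later, and quieter
print(binaural_cues(left, right, fs))
```

In the dynamic setting described above, cues like these would be recomputed over short frames as the source and the head move, so their trajectories, rather than any single static value, carry the information about source position.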

Our goal is to use deep learning to understand how artificial agents estimate the location of sounds, including how rapidly and effectively neural networks can localise a source in the world. We will extend this approach to develop models that predict the recorded neural activity in auditory cortex, and compare the receptive fields measured biologically with those that emerge in the deep networks.
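For concreteness, here is a minimal sketch of what such a localisation network might look like, assuming a supervised setup in which a two-channel (left/right ear) spectrogram is mapped to discrete azimuth bins. The architecture, names and sizes are illustrative assumptions, not the project's actual model.

```python
import torch
import torch.nn as nn

class AzimuthNet(nn.Module):
    """Toy CNN mapping a binaural spectrogram to azimuth-bin logits."""

    def __init__(self, n_azimuth_bins: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 channels: one per ear
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse frequency and time
        )
        self.classifier = nn.Linear(32, n_azimuth_bins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, n_freq, n_time) binaural spectrogram
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logits over azimuth bins

# Train with cross-entropy against the true azimuth bin (mock data shown).
model = AzimuthNet()
logits = model(torch.randn(8, 2, 64, 100))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 36, (8,)))
```

Framing localisation as classification over azimuth bins is only one common choice; regressing the angle directly, or using sequence models that exploit the dynamic cues generated by movement, are natural extensions.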

You will be supervised by Professor Jennifer Bizley, whose lab has designed and implemented the experimental paradigms, and Professor Nick Lesica, who is an expert in using deep learning to understand central auditory processing. Your role will be to build networks that estimate sound source location, and to participate in data collection, allowing you to test the predictions your models generate. You will have expertise in computational neuroscience, computer science or biomedical engineering. Experience in deep learning, signal processing and/or analysing neural signals would be an advantage.

This project will result in a better understanding of how the brain maps sounds in the world. This fundamental knowledge is critical for developing cortical implants to restore hearing, while the AI algorithms developed could be integrated into hearing aids or cochlear implants.