How do artificial agents localise sounds in the world?
Project ID: 2531bd1667
(You will need this ID for your application)
Research Theme: Artificial Intelligence and Robotics
Research Area(s):
Artificial intelligence technologies
Digital signal processing
Vision, hearing and other senses
UCL Lead department: Ear Institute
Lead Supervisor: Jennifer Bizley
Project Summary:
Our ability to pinpoint where a sound comes from in space is critical not only for survival, but also for listening to speech in noisy environments. Most studies of sound localisation, and most AI approaches to modelling the auditory system, fail to capture the dynamic nature of real-world listening, in which sounds move through space and listeners themselves move, generating dynamic localisation cues as they do so.
Our goal is to use deep learning to understand how artificial agents estimate the location of sounds, including how rapidly and effectively neural networks can estimate source location in world-centred reference frames. We will compare unit responses across network layers with neural activity recorded in the auditory cortex, allowing us to compare the biological solution to this problem with those discovered through deep learning.
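To illustrate the kind of model-to-brain comparison this involves, the sketch below rank-correlates representational dissimilarity matrices (RDMs) computed from a network layer and from cortical recordings to the same set of source locations; a higher correlation would suggest a more cortex-like representation in that layer. This is a generic representational similarity analysis, not the project's prescribed pipeline, and the data, array shapes and variable names are hypothetical placeholders.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    # Condensed representational dissimilarity matrix: 1 - Pearson
    # correlation between the response patterns for each pair of
    # stimulus conditions. responses: (conditions, units).
    return pdist(responses, metric="correlation")

# Hypothetical data: one network layer and one cortical recording
# responding to the same 24 sound-source locations.
rng = np.random.default_rng(0)
layer_acts = rng.standard_normal((24, 256))  # 24 locations x 256 units
cortex = rng.standard_normal((24, 80))       # 24 locations x 80 neurons

# Second-order comparison: rank-correlate the two RDMs.
rho, _ = spearmanr(rdm(layer_acts), rdm(cortex))
print(f"layer-cortex RDM similarity (Spearman rho): {rho:.3f}")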
You will be supervised by Professor Jennifer Bizley, whose lab has designed and implemented the experimental paradigms and collected the relevant biological data, and Professor Nick Lesica, who is an expert in using deep learning to understand central auditory processing. Your role will be to build networks that estimate sound source location in the world from the acoustic cues available at the ears, together with velocity and head-direction signals. There is the potential to participate in data collection to test predictions generated by your models. You should have expertise in computational neuroscience, computer science or biomedical engineering; experience in deep learning, signal processing and/or analysing neural signals would be an advantage.
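As a rough sketch of the modelling task (not a prescription for the project), the toy PyTorch network below fuses binaural acoustic features with per-frame head-direction and velocity signals to regress the sine and cosine of a world-centred source azimuth. All names, dimensions and architecture choices here are illustrative assumptions.

import torch
import torch.nn as nn

class WorldCentredLocaliser(nn.Module):
    # Toy model: combine binaural spectrogram frames with head-state
    # signals to estimate a world-centred source azimuth per frame.
    def __init__(self, n_freq=32, hidden=128):
        super().__init__()
        # Recurrent front end over binaural input
        # (left + right ear -> 2 * n_freq features per time step).
        self.audio_rnn = nn.GRU(input_size=2 * n_freq,
                                hidden_size=hidden, batch_first=True)
        # Head state per frame: sin/cos of head direction + angular velocity.
        self.head_mlp = nn.Sequential(nn.Linear(3, 32), nn.ReLU())
        # Fuse the two streams and regress sin/cos of azimuth.
        self.readout = nn.Linear(hidden + 32, 2)

    def forward(self, ears, head):
        # ears: (batch, time, 2 * n_freq); head: (batch, time, 3)
        h, _ = self.audio_rnn(ears)
        g = self.head_mlp(head)
        out = self.readout(torch.cat([h, g], dim=-1))
        # Normalise so each frame's output is a unit vector on the circle.
        return out / out.norm(dim=-1, keepdim=True).clamp(min=1e-6)

model = WorldCentredLocaliser()
ears = torch.randn(4, 100, 64)  # 4 trials, 100 frames, 2 ears x 32 bands
head = torch.randn(4, 100, 3)   # sin/cos heading + angular velocity
pred = model(ears, head)        # (4, 100, 2): sin/cos of azimuth per frame

Training such a network against ground-truth azimuths while it "moves" through simulated acoustic scenes would expose it to the same dynamic localisation cues a moving listener experiences, which is the setting the project summary describes.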
This project will result in a better understanding of how the brain maps sounds in the world. This fundamental knowledge is critical for developing cortical implants to restore hearing and for enhancing virtual and augmented reality, and the AI algorithms developed could be integrated into hearing aids or cochlear implants.