Summary
With the rise of urbanization, silence has become a rarity. Sound is all around us, and our hearing skills are essential in everyday life. Spatial hearing is one of these skills: we use sound localization to determine where something is happening in our surroundings, or to ‘zoom in’ on a friend’s voice and filter out the background noise in a bar. But how does the brain compute the location of real-life, complex sounds such as a voice? Knowledge of these neural computational mechanisms is crucial for developing remedies for when spatial hearing fails, as in hearing loss (affecting >34 million EU citizens). Hearing-impaired (HI) listeners experience great difficulty understanding speech in everyday, noisy environments, even when using an assistive hearing device such as a cochlear implant (CI). Their difficulties are partly caused by reduced spatial hearing, which hampers filtering out a specific sound, such as a voice, based on its position. The resulting communication problems affect personal wellbeing as well as the economy (e.g. through higher unemployment rates). In SOLOC, I use an innovative, interdisciplinary approach combining cutting-edge computational modelling (deep neural networks) with state-of-the-art neuroscience and clinical audiology to gain insight into the brain mechanisms underpinning sound localization. Using this knowledge, I explore signal processing strategies for CIs that boost spatial encoding in the brain to improve speech-in-noise understanding. Through this Global Fellowship, I connect the unique computational expertise of Prof. Mesgarani (Columbia University), and his experience translating computational neuroscience into clinical applications, with the exceptional medical expertise on hearing loss and CIs of Prof. Kremer (Maastricht University). Hence, by implementing SOLOC I will develop into a multidisciplinary, independent researcher operating at the interface of neuroscience, computational modelling, and clinical audiology.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/898134
Start date: 01-06-2020
End date: 31-05-2022
Total budget: 182 419,20 Euro — Public funding: 182 419,00 Euro
Cordis data
Status: TERMINATED
Call topic: MSCA-IF-2019
Update date: 28-04-2024