Summary
How do we understand the complex, noisy, and often incomplete sounds that reach our ears? Sounds are not perceived in isolation. The context in which we encounter them allows us to predict what we may hear next. How is contextual sound processing implemented in the brain? From the sensory periphery to the cortex, complex information is extracted from the acoustic content of sounds. This feedforward processing is complemented by extensive feedback (FB) processing. Current research suggests that FB confers flexibility to auditory perception by generating predictions of the input and supplementing noisy or missing input with contextual information. So far, limitations in the coverage and spatial resolution of non-invasive imaging methods have prevented grounding contextual sound processing in the fundamental computational units of the human brain, resulting in an incomplete understanding of its biological underpinnings. Here, I propose to use ultra-high-field (UHF) functional magnetic resonance imaging (fMRI) to study how expectations shape human hearing. The high spatial resolution and coverage of UHF-fMRI will allow examining fundamental brain units: small subcortical structures and the layers of cortex. I will investigate how responses change when acoustic information needs to be prioritized in an uncertain or noisy soundscape. Complementing UHF-fMRI measurements with magnetoencephalography (MEG), I will derive a neurobiological model of contextual sound processing at high spatial and temporal resolution. By comparing this model to state-of-the-art artificial intelligence, I will generalize it to naturalistic settings. This project links the algorithmic and implementation levels at a mesoscopic scale to reveal the neurobiological mechanisms supporting hearing in context. The resulting model will allow testing hypotheses of aberrant contextual processing in phantom hearing (tinnitus and auditory hallucinations).
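The predictive account sketched above, in which contextual expectations supplement noisy bottom-up evidence, can be illustrated with a minimal, generic precision-weighted update. The Python sketch below is purely illustrative: the function name precision_weighted_update and all parameter values are hypothetical, and the Kalman-style weighting shown is a textbook formulation, not the project's actual model.

import numpy as np

def precision_weighted_update(prediction, observation, prior_var, noise_var):
    # Precision is inverse variance: the more reliable signal gets the larger weight.
    gain = prior_var / (prior_var + noise_var)                  # weight given to the sensory evidence
    estimate = prediction + gain * (observation - prediction)   # correct the prediction by the weighted error
    posterior_var = (1.0 - gain) * prior_var                    # uncertainty shrinks after combining both sources
    return estimate, posterior_var

# Toy example: a reliable contextual prediction meets very noisy acoustic evidence.
rng = np.random.default_rng(0)
true_tone = 1.0                                   # hypothetical "tone intensity"
prediction = 0.9                                  # contextual expectation (assumed value)
observation = true_tone + rng.normal(scale=2.0)   # noisy bottom-up input (assumed noise level)
estimate, var = precision_weighted_update(prediction, observation, prior_var=0.1, noise_var=4.0)
print(f"observation = {observation:.2f}, combined estimate = {estimate:.2f}")

With these assumed values the gain on the noisy observation is small (about 0.02), so the combined estimate stays close to the contextual prediction — the qualitative behaviour the summary attributes to feedback when acoustic input is uncertain or noisy.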
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101001270
Start date: 01-07-2021
End date: 30-06-2026
Total budget - Public funding: 1 900 000,00 Euro - 1 900 000,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2020-COG
Update Date: 27-04-2024