Summary
This interdisciplinary project will define an integrated model of speech processing by recording, modelling and manipulating neural oscillatory dynamics during the perception of speech, treated as a multiscale temporal signal. Dominant models of speech perception describe its underlying neural mechanisms at a static neuroanatomical level, neglecting the cognitive algorithmic and neural dynamic levels of description. The latter can only be investigated by considering the temporal dimension of speech, which is structured according to a hierarchy of linguistic timescales (phoneme, syllable, word, phrase). Recent advances in behavioural paradigms, computational modelling, and neuroimaging data analysis now make it possible to investigate the cognitive algorithms and neural dynamics underlying speech processing. To define an integrated model of speech perception, this project seeks to: 1. record neural activity in humans with magnetoencephalography and intracranial recordings during the perception of continuous speech; 2. quantify the linguistic information at each timescale of speech with a computational model; and 3. estimate their respective and shared neural correlates with multivariate and directed connectivity analyses. Feasibility is ensured by in-house access to neuroimaging and intracranial recordings, as exemplified by the data in Figure 1 of this proposal. This project will critically test whether neural oscillations play a fundamental role in the computational processes of perception and cognition. It will define the mapping between speech and neural timescales and reveal how information is transferred and combined along the linguistic processing hierarchy. Overall, it will specify the respective contributions of the left and right hemispheric ventral and dorsal auditory pathways to speech processing, in terms of both the nature of the information processed and the dynamical hierarchical organization.
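The core analytical idea described above, relating linguistic information at a given timescale to neural activity, can be illustrated with a toy sketch. The project's actual analyses (MEG and intracranial data, computational models of linguistic content, directed connectivity) are far richer; the sketch below merely shows the basic logic of band-limiting a speech-derived signal to one linguistic timescale and measuring speech-brain coherence. All signals, band boundaries, and parameters here are illustrative assumptions, not the project's methods.

```python
# Toy illustration of timescale-resolved speech-brain coherence.
# All signals are simulated and all band edges are hypothetical.
import numpy as np
from scipy.signal import butter, coherence, sosfiltfilt

fs = 200.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)    # 60 s of simulated data
rng = np.random.default_rng(0)

# Simulated "speech envelope" with a dominant syllable-rate (~5 Hz) rhythm
envelope = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
# Simulated neural signal that partially tracks the envelope, plus noise
neural = 0.6 * envelope + rng.standard_normal(t.size)

# Example linguistic-timescale bands in Hz (boundaries are illustrative)
bands = {"phrase": (0.5, 2.0), "syllable": (4.0, 8.0), "phoneme": (15.0, 40.0)}

band_coh = {}
for name, (lo, hi) in bands.items():
    # Band-limit the envelope to this linguistic timescale
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    env_band = sosfiltfilt(sos, envelope)
    # Spectral coherence between band-limited envelope and neural signal
    f, cxy = coherence(env_band, neural, fs=fs, nperseg=1024)
    in_band = (f >= lo) & (f <= hi)
    band_coh[name] = cxy[in_band].mean()
    print(f"{name:>8s} band mean coherence: {band_coh[name]:.3f}")
```

Because the simulated neural signal tracks the syllable-rate rhythm, coherence is highest in the syllable band; in a real analysis the same comparison across timescales would reveal which linguistic level a neural signal follows.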
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101043344
Start date: 01-10-2022
End date: 30-09-2027
Total budget - Public funding: 1 861 100,00 Euro - 1 861 100,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2021-COG
Update date: 09-02-2023