PreSpeech | Predicting speech: what and when does the brain predict during language comprehension?

Summary
The ability to recognise spoken words relies on extracting phonological features from the acoustic input that distinguish a word from its cohort competitors. Neuronal circuits have been suggested to use context to predict and pre-activate specific speech features. Bottom-up feature recognition is enabled by phase and amplitude coupling between cortical oscillations and the acoustic signal, and this process is facilitated by top-down predictive processing. Predictions may be critical for the speed, accuracy and noise resistance that distinguish fluent speech recognition, and have been related to specific cortical oscillatory patterns (theta/delta synchronisation, beta increase). Despite this evidence, the nature and timing of these predictions remain unclear. This project will address these key questions using MEG and EEG, which provide good spatiotemporal resolution. Increasing the predictability of a word should have two effects: A) Neuronal populations encoding its phonological form will become active before the word is uniquely identifiable from its potential competitors by bottom-up analysis (the uniqueness point, UP). Using spatiotemporal multivariate pattern analysis, we will test whether the latency of phonological feature detection, used for word identification, is modulated by a word's contextual predictability. If specific predictions are made, divergence should occur before the UP and should be supported by changes in oscillatory activity; B) Generating predictions should decrease bottom-up feature-processing demands. We expect predictability to reduce the phase-amplitude coupling between the speech envelope and gamma oscillations (a neuronal measure of phonological processing). In summary, we aim to identify whether and how context enables prediction of an incoming word's form before it can be established acoustically. This will be critical for understanding the cortical architecture of speech processing, with practical applications in artificial speech recognition.
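The abstract's second hypothesis rests on phase-amplitude coupling (PAC) between a slow signal (the speech envelope) and gamma-band amplitude. The project's actual analysis pipeline is not described here; as a minimal illustration of the quantity involved, the sketch below computes a Tort-style modulation index on synthetic data, where a 4 Hz "envelope" modulates the amplitude of a 40 Hz "gamma" signal. All names and parameters (sampling rate, bin count, frequencies) are illustrative assumptions, not the project's settings.

```python
# Minimal sketch of a Tort-style modulation index (MI), a standard measure
# of phase-amplitude coupling. Synthetic data only; not the project's pipeline.
import numpy as np
from scipy.signal import hilbert

def modulation_index(phase, amplitude, n_bins=18):
    """MI = KL divergence of the phase-binned amplitude distribution
    from uniform, normalised by log(n_bins)."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amplitude[idx == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()        # amplitude distribution over phase bins
    kl = np.sum(p * np.log(p * n_bins))  # KL divergence from the uniform distribution
    return kl / np.log(n_bins)

fs = 500                                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 4 * t)              # theta-rate "envelope"
gamma = (1 + slow) * np.sin(2 * np.pi * 40 * t)  # amplitude-modulated "gamma"
phase = np.angle(hilbert(slow))               # instantaneous phase of the slow signal
amp = np.abs(hilbert(gamma))                  # gamma amplitude envelope
coupled = modulation_index(phase, amp)
rng = np.random.default_rng(0)
uncoupled = modulation_index(rng.uniform(-np.pi, np.pi, t.size), amp)
print(coupled > uncoupled)  # genuine coupling yields a larger MI
```

Under the project's hypothesis B, higher word predictability would show up as a reduction in such a coupling measure between the speech envelope and gamma activity.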
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/798971
Start date: 10-09-2018
End date: 30-12-2020
Total budget: EUR 170 121,60 / Public funding: EUR 170 121,00
Cordis data

Status: CLOSED
Call topic: MSCA-IF-2017
Update date: 28-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
H2020-EU.1.3.2. Nurturing excellence by means of cross-border and cross-sector mobility
H2020-MSCA-IF-2017
MSCA-IF-2017