Summary
The human brain has evolved the ability to support communication in complex and dynamic environments. In such environments, language is learned and mostly used in face-to-face contexts, in which processing and learning are based on multiple cues: linguistic (such as lexical and syntactic), but also discourse, prosody, face and hands (gestures). Yet our understanding of how language is learnt and processed, and of its associated neural circuitry, comes almost exclusively from reductionist approaches in which the multimodal signal is reduced to speech or text. ECOLANG will pioneer a new way to study language comprehension and learning using a real-world approach in which language is analysed in its rich face-to-face multimodal environment (i.e., language’s ecological niche). Experimental rigour is not compromised, thanks to innovative technologies (combining automatic, manual and crowdsourced methods for annotation; creating avatar stimuli for our experiments) and state-of-the-art modelling and data analysis (probabilistic modelling and network-based analyses). ECOLANG studies how the different cues available in face-to-face communication dynamically contribute to processing and learning in adults, children and aphasic patients, in contexts representative of everyday conversation. We collect and annotate a corpus of naturalistic language, which is then used to derive quantitative informativeness measures for each cue and their combinations using computational models that are tested and refined on the basis of behavioural and neuroscientific data. We use converging methodologies (behavioural, EEG, fMRI and lesion-symptom mapping) and investigate different populations (3-4-year-old children, healthy adults and aphasic adults) in order to develop mechanistic accounts of multimodal communication at the cognitive as well as the neural level, accounts that can explain processing and learning (by both children and adults) and can have an impact on the rehabilitation of language functions after stroke.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/743035
Start date: 01-01-2018
End date: 30-06-2024
Total budget - Public funding: 2 243 584,00 Euro - 2 243 584,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2016-ADG
Update Date: 27-04-2024