MECHIDENT | Who is that? Neural networks and mechanisms for identifying individuals

Summary
Our social interactions and survival critically depend on identifying specific individuals to interact with or avoid (“who is that?”). Individuals can be identified through different sensory inputs, and by many accounts any one sensory input elicits a representation of an individual that becomes transmodal, i.e. independent of any single sensory system. However, how the brain achieves the transmodal integration that supports individual recognition remains a mystery: investigations in humans allowing direct access to site-specific neuronal processes are rare and have not focused on neuronal multisensory integration for person recognition, and animal models for studying the neuronal mechanisms of related processes have only recently become available. I propose to use direct recordings of neuronal activity in both humans and monkeys during face- and voice-identification tasks, combined with site-specific manipulation of the sensory input streams into the lateral anterior temporal lobe (ATL). The ATL brings together identity-specific content from the senses, but the neuronal mechanisms of this convergence are entirely unknown. My core hypothesis is that auditory voice-identity or visual face-identity input into key ATL convergence sites elicits a sensory-modality-invariant representation which, once elicited, is robust to degradation or inactivation of neuronal input from the other sense. The central aim is to test this in human patients undergoing monitoring for surgery and to directly compare and link the results with those in monkeys, where the neuronal circuit and its mechanisms can be revealed using optogenetic control of neuronal responses. Analyses will assess neuronal dynamics within sensory-integration frameworks. This project is poised to unravel how the brain combines the multisensory input that is critical for identifying individuals and for the cognitive operations that act on those representations. The basic-science insights gained may inform efforts to stratify patients with different types of ATL damage.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/724198
Start date: 01-07-2017
End date: 30-06-2023
Total budget: EUR 1 995 677,00 | Public funding: EUR 1 995 677,00
Cordis data

Status

SIGNED

Call topic

ERC-2016-COG

Update Date

27-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
ERC-2016
ERC-2016-COG