ORIENT | Goal-directed eye-head coordination in dynamic multisensory environments

Summary
Rapid object identification is crucial for the survival of all organisms, but it poses daunting challenges when many stimuli compete for attention and multiple sensory and motor systems are involved in processing, programming, and generating an eye-head gaze-orienting response to a selected goal. How do normal and sensory-impaired brains decide which signals to integrate ("goal") and which to suppress ("distracter")?
Audiovisual (AV) integration only helps for spatially and temporally aligned stimuli. However, sensory inputs differ markedly in reliability, reference frames, and processing delays, leaving the brain with considerable spatiotemporal uncertainty. Vision and audition use coordinate frames that become misaligned whenever the eyes and head move, and their acuities vary across space and time in fundamentally different ways. As a result, assessing AV alignment poses major computational problems, which have so far been studied only for the simplest stimulus-response conditions.
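To make the computational problem concrete, here is a minimal sketch of reliability-weighted (maximum-likelihood) cue fusion after bringing both estimates into a common head-centered frame. It assumes Gaussian, independent sensory noise and a known eye-in-head signal; all names and values are illustrative, not taken from the project.

import numpy as np

def integrate_av(x_vis_retinal, sigma_vis, x_aud_head, sigma_aud, eye_in_head):
    # Vision is sensed in eye-centered (retinal) coordinates, audition in
    # head-centered coordinates; adding the current eye-in-head position
    # brings the visual estimate into the common head frame before fusion.
    x_vis_head = x_vis_retinal + eye_in_head
    # Maximum-likelihood fusion: each cue is weighted by its reliability
    # (inverse variance), so the fused estimate is more precise than either.
    w_vis, w_aud = 1.0 / sigma_vis**2, 1.0 / sigma_aud**2
    x_fused = (w_vis * x_vis_head + w_aud * x_aud_head) / (w_vis + w_aud)
    sigma_fused = np.sqrt(1.0 / (w_vis + w_aud))
    return x_fused, sigma_fused

# A flash 10 deg right on the retina while the eyes are 5 deg right of the
# head, paired with a sound heard at 14 deg; vision is the more reliable cue.
print(integrate_av(10.0, sigma_vis=2.0, x_aud_head=14.0, sigma_aud=6.0, eye_in_head=5.0))

Note that the frame alignment step is exactly what breaks down during eye-head movements: if eye_in_head is uncertain or delayed, the two cues are fused in misaligned frames.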
My groundbreaking approaches will tackle these problems at different levels by applying dynamic eye-head coordination paradigms in complex environments, while systematically manipulating visual-vestibular-auditory context and uncertainty. I will parametrically vary AV goal/distracter statistics, stimulus motion, and active versus passively evoked body movements. We will perform advanced psychophysics on healthy subjects and on patients with well-defined sensory disorders. We will probe the sensorimotor strategies of normal and impaired systems by quantifying how they acquire priors about the (changing) environment and how they use feedback about actively or passively induced self-motion of the eyes and head.
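As one hedged illustration of what quantifying the acquisition of priors can mean, the sketch below implements a recursive Gaussian (Kalman-style) update of a spatial prior. The drift term is an assumption added here so the prior can track a changing environment; none of the parameters come from the project.

import numpy as np

def update_prior(mu, sigma, x_obs, sigma_obs, drift=1.0):
    # Each observed target pulls the prior mean toward the current stimulus
    # statistics; the drift term keeps the prior variance from collapsing,
    # so the estimate can follow a *changing* environment.
    sigma = np.hypot(sigma, drift)              # diffuse: allow for change
    k = sigma**2 / (sigma**2 + sigma_obs**2)    # gain: weight on new data
    mu = mu + k * (x_obs - mu)                  # shift prior toward observation
    sigma = np.sqrt((1.0 - k) * sigma**2)       # sharpen after the observation
    return mu, sigma

# Targets cluster at +20 deg for 50 trials, then jump to -10 deg:
# the learned prior follows the change instead of freezing.
rng = np.random.default_rng(1)
mu, sigma = 0.0, 20.0
for x in np.concatenate([rng.normal(20, 5, 50), rng.normal(-10, 5, 50)]):
    mu, sigma = update_prior(mu, sigma, x, sigma_obs=5.0)
print(round(mu, 1))   # ends near -10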
I will challenge current eye-head control models by incorporating top-down adaptive processes and eye-head motor feedback into realistic cortical-midbrain networks. Our models will be critically tested on an autonomously learning humanoid robot equipped with binocular foveal vision and human-like audition.
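For readers unfamiliar with feedback models of gaze control, the following toy simulation caricatures the shared gaze-motor-error feedback idea that such models build on. The gains, the oculomotor-range limit, and the absence of a vestibulo-ocular reflex are simplifying assumptions of this sketch, not features of the project's models.

import numpy as np

def gaze_shift(delta_g, k_eye=8.0, k_head=2.5, eye_limit=35.0, dt=0.001, t_max=1.5):
    # Eye and head are driven by the same remaining gaze motor error: the eye
    # moves fast until it hits its oculomotor range, after which the slower
    # head carries the rest of the shift.
    eye, head = 0.0, 0.0
    for _ in range(int(t_max / dt)):
        gaze_error = delta_g - (eye + head)
        eye = np.clip(eye + k_eye * gaze_error * dt, -eye_limit, eye_limit)
        head += k_head * gaze_error * dt
    return eye, head

eye, head = gaze_shift(60.0)    # a 60 deg gaze shift recruits the head
print(eye, head, eye + head)    # eye saturates; gaze ends near 60 deg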
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/693400
Start date: 01-01-2017
End date: 31-12-2022
Total budget: EUR 2 523 438.00 (public funding: EUR 2 523 438.00)
Cordis data

Status: CLOSED
Call topic: ERC-ADG-2015
Update date: 27-04-2024
Structured mapping
Horizon 2020
  H2020-EU.1. EXCELLENT SCIENCE
    H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
      ERC-2015
        ERC-2015-AdG
          ERC-ADG-2015 (ERC Advanced Grant)