Summary
To interact effectively with the complex, dynamic, multisensory world (e.g. traffic), the brain needs to transform the barrage of signals into a coherent percept. This requires it to solve the causal inference or binding problem: deciding which signals come from common sources and integrating them accordingly. Doing so exactly (i.e. optimally) is wildly computationally intractable for all but the simplest laboratory scenes. It is unknown how the brain computes approximate solutions for realistic scenes in the face of resource constraints.
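To make that intractability concrete: exact causal inference has to weigh every way of partitioning the N incoming signals into putative sources, and the number of such partitions is the Bell number B(N), which grows super-exponentially. A minimal Python sketch of this standard counting argument (illustrative only, not the project's code):

# Illustrative sketch (a standard counting argument, not the project's code):
# exact causal inference must evaluate every way of partitioning N signals
# into putative sources. That count is the Bell number B(N), computed here
# via the Bell triangle.

def bell_numbers(n_max):
    """Return [B(1), ..., B(n_max)]: the number of set partitions of n items."""
    row = [1]
    bells = [1]  # B(1) = 1
    for _ in range(n_max - 1):
        new_row = [row[-1]]  # each row starts with the previous row's last entry
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[-1])  # the row's last entry is the next Bell number
    return bells

for n, b in enumerate(bell_numbers(10), start=1):
    print(f"{n:2d} signals -> {b:>7,} candidate causal structures")

Ten signals already admit 115,975 candidate causal structures; a busy street corner delivers far more signals than that, which is why approximate strategies are unavoidable.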
This ambitious interdisciplinary project combines statistical, computational, behavioural and neuroimaging (3/7T-fMRI, MEG/EEG, TMS) methods to determine how, and how well, the brain solves the causal inference problem in progressively richer multisensory environments.
The key hypothesis is that observers compute approximate solutions by sequentially selecting subsets of signals for perceptual integration, via attentional and active-sensing mechanisms guided by the perceptual tasks they are executing, their prior expectations about the world’s causal structure, and bottom-up salience maps. I will build parallel normative/approximate Bayesian and transformer network models of these processes and combine them with behavioural and neuroimaging data to unravel the neurocomputational mechanisms.
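For readers unfamiliar with the normative starting point, below is a minimal Python sketch of the classic two-signal Bayesian causal inference model (Körding et al., 2007) that such normative accounts build on; the noise and prior parameters are illustrative defaults, not values from this project.

import numpy as np

def causal_inference_estimate(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                              sigma_p=10.0, p_common=0.5):
    """Model-averaged auditory location estimate for one audio-visual pair.

    Classic two-signal Bayesian causal inference (Kording et al., 2007):
    weigh the hypothesis that both signals share one cause (C=1) against
    independent causes (C=2), then average the conditional estimates.
    Gaussian sensory noise; zero-mean Gaussian spatial prior (std sigma_p).
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the two measurements under a single shared cause.
    denom1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / denom1) / (2 * np.pi * np.sqrt(denom1))

    # Likelihood under two independent causes (each drawn from the prior).
    like_c2 = (np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp)))
               / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))

    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted conditional estimates (prior mean at 0 degrees).
    s_c1 = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # fuse both signals
    s_c2 = (x_a / va) / (1 / va + 1 / vp)                      # audio alone

    # Model averaging: blend the two estimates by the causal posterior.
    return post_c1 * s_c1 + (1 - post_c1) * s_c2, post_c1

# Example: nearby signals are mostly fused, discrepant ones segregated.
print(causal_inference_estimate(x_a=5.0, x_v=4.0))    # high post_c1, fused
print(causal_inference_estimate(x_a=5.0, x_v=-15.0))  # low post_c1, segregated

Note that even this exact solution handles only two signals; extending the same sum over causal structures to whole scenes runs into the combinatorial explosion sketched above, which is exactly where the hypothesised attentional subset selection would come in.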
The project will develop a novel computational and neuromechanistic account of causal inference in more realistic multisensory scenes, addressing fundamental questions about binding, inference and probabilistic computations. By bringing lab research closer to the real world, it will radically alter our perspective: shifting from near-optimal passive perception in simple scenes to active information gathering in the service of approximate solutions in more realistic scenes. It has the potential to inspire new AI algorithms and drive transformative insights into the perceptual difficulties older and clinical populations face in the real world.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101096659
Start date: 01-04-2024
End date: 31-03-2029
Total budget: 2 499 527,00 EUR
Public funding: 2 499 527,00 EUR
Cordis data
Status: SIGNED
Call topic: ERC-2022-ADG
Update date: 12-03-2024