Summary
Bayesian inference optimally estimates probabilities from limited and noisy data by taking levels of uncertainty into account. I noticed that human probability estimates are accompanied by rational confidence levels denoting their precision; I thus propose here that the human sense of probability is Bayesian. This Bayesian nature constrains the estimation, neural representation and use of probabilities, which I aim to characterize by combining psychology, computational models and neuroimaging.
I will characterize the Bayesian sense of probability computationally and psychologically. Human confidence as Bayesian precision will be my starting point; from there, I will test other formalizations and look for the human algorithms that approximate Bayesian inference. I will test whether confidence depends on explicit reasoning (with implicit electrophysiological measures), develop ways of measuring its accuracy in a learning context, and test whether it is trainable and domain-general.
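To make "confidence as Bayesian precision" concrete, here is a minimal sketch in a conjugate Beta-Bernoulli model (an illustration of the general idea, not the project's actual model; the function name and uniform prior are my assumptions): the posterior mean gives the probability estimate, and the inverse posterior variance gives a confidence level that grows with the amount of data even when the estimate itself moves little.

```python
# Minimal sketch: confidence as Bayesian precision in a Beta-Bernoulli model.
# Illustrative assumptions throughout; not the project's actual model.
from scipy.stats import beta

def estimate_probability(successes, failures, a0=1.0, b0=1.0):
    """Return (probability estimate, confidence) under a Beta(a0, b0) prior."""
    posterior = beta(a0 + successes, b0 + failures)  # conjugate Beta posterior
    estimate = posterior.mean()                      # point estimate of p
    confidence = 1.0 / posterior.var()               # precision = 1 / variance
    return estimate, confidence

# Ten times more data with the same observed proportion gives a similar
# estimate but roughly sevenfold higher confidence:
print(estimate_probability(3, 1))    # estimate ~0.67, precision ~32
print(estimate_probability(30, 10))  # estimate ~0.74, precision ~222
```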
I will then look for the neural codes of Bayesian probabilities, leveraging encoding models for functional magnetic resonance imaging (fMRI) and goal-driven artificial neural networks to propose new codes. I will ask whether confidence information is embedded in the neural representation of the probability estimate itself or is separable from it.
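One generic form such an encoding-model analysis can take (a sketch on simulated data under my own assumptions, not the project's actual pipeline): trial-wise regressors derived from an ideal-observer model, here a probability estimate and a confidence level, are fitted to each voxel's response with regularized regression, and candidate neural codes are compared by how well they predict held-out activity.

```python
# Generic voxelwise encoding-model sketch on simulated data; regressor names,
# dimensions and noise levels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Trial-wise quantities a Bayesian ideal observer would compute
prob = rng.uniform(0.0, 1.0, n_trials)   # probability estimate
conf = rng.gamma(2.0, 1.0, n_trials)     # confidence (precision)
X = np.column_stack([prob, conf])

# Simulated voxel responses driven by both quantities, plus noise
W = rng.normal(size=(2, n_voxels))
Y = X @ W + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Fit one ridge model per voxel; cross-validated R^2 measures how well the
# candidate code predicts held-out responses.
model = RidgeCV(alphas=np.logspace(-2, 3, 10))
scores = [cross_val_score(model, X, Y[:, v], cv=5).mean()
          for v in range(n_voxels)]
print(f"median held-out R^2 across voxels: {np.median(scores):.2f}")
```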
Last, I will investigate a key function of confidence: the regulation of learning. I will test the involvement of neuromodulators such as noradrenaline in this process, exploiting both within- and between-subject variability in the activity of key neuromodulatory nuclei (with advanced fMRI) and in the cortical release of noradrenaline during learning and its receptor density (with positron-emission tomography), and I will test for causality with a pharmacological intervention.
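To illustrate how confidence can regulate learning, here is a textbook precision-weighting scheme (my own sketch, not the proposed neural mechanism): when the learner's precision is high, new outcomes move the estimate little, as in a Kalman gain. In a volatile world precision must also be down-weighted when the environment changes, and that resetting step is where neuromodulators such as noradrenaline are hypothesized to act.

```python
# Sketch of precision-weighted updating (Kalman-gain style); illustrative
# assumptions throughout, not the proposed neural mechanism.
def update(estimate, precision, outcome, evidence_precision=1.0):
    """One confidence-weighted update: the learning rate is the relative
    precision of the new evidence, so confident estimates change little."""
    learning_rate = evidence_precision / (precision + evidence_precision)
    estimate += learning_rate * (outcome - estimate)
    precision += evidence_precision   # confidence accumulates with data
    return estimate, precision

estimate, precision = 0.5, 1.0
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:
    estimate, precision = update(estimate, precision, outcome)
    print(f"estimate={estimate:.2f}  precision={precision:.0f}  "
          f"learning rate just used={1/precision:.2f}")
```

The point of the sketch is the last column: the effective learning rate falls as confidence accumulates, so any mechanism that resets or leaks precision directly changes how fast the learner updates.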
Characterizing the sense of probability has broad implications: it should improve our understanding of how we represent the world with probabilistic internal models, how we learn, and how we make decisions.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/948105
Start date: 01-02-2021
End date: 31-01-2027
Total budget - public funding: 1 499 963,00 Euro - 1 499 963,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2020-STG
Update date: 27-04-2024