VIGILIA | VIrtual GuardIan AngeLs for the post-truth Information Age

Summary
This project is motivated by the hypothesis that we are approaching a post-truth society, where it becomes practically impossible for anyone to distinguish fact from fiction for all but the most trivial questions. A main cause of this development is people’s innate reliance on cognitive heuristics to process information, which can lead to biases and poor decision making. These biases risk being amplified in a networked society, where people depend on others to form their opinions and judgments and to determine their actions. Contemporary generative Artificial Intelligence technologies may further exacerbate this risk, given their ability to fabricate and efficiently spread false but highly convincing information at an unprecedented scale. Combined with expected advances in virtual, augmented, mixed, and extended reality technologies, this may create an epistemic crisis with consequences that are hard to fully fathom: a post-truth era on steroids.

In the VIGILIA project, we will investigate a possible mitigation strategy. We propose to develop automated techniques for detecting triggers of cognitive biases and heuristics in humans when confronted with information, as well as their effects at an interpersonal level (on trust, reputation, and information propagation) and at a societal level (possible irrational behaviour and polarization). We aim to achieve this by leveraging techniques from AI itself, in particular Large Language Models, as well as by building on advanced user modelling approaches from the past ERC Consolidator Grant FORSIED. Our results will be integrated within tools that we refer to as VIrtual GuardIan AngeLs (VIGILs), aimed at news and social media consumers, journalists, scientific researchers, and political decision makers. Ethical questions that arise will be identified and treated as first-class research questions within the project.
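To make the intended approach more concrete, the minimal sketch below shows one way an LLM could be prompted to flag possible cognitive-bias triggers in a piece of text. It is an illustration only: the trigger taxonomy, the prompt wording, and the query_llm placeholder are assumptions made for this example and are not taken from the VIGILIA project.

import json

# Hypothetical trigger taxonomy for illustration; the project's actual categories are not specified here.
BIAS_TRIGGERS = ["confirmation bias", "availability heuristic", "anchoring", "bandwagon effect"]

PROMPT_TEMPLATE = (
    "You flag possible cognitive-bias triggers in text.\n"
    "Return a JSON list of objects with fields 'span', 'trigger', and 'explanation'.\n"
    "Candidate triggers: {triggers}\n\nText:\n{text}\n"
)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM API.
    Returns canned JSON so the sketch runs offline; swap in a real client for actual use."""
    return json.dumps([{
        "span": "Everyone already knows the new policy failed",
        "trigger": "bandwagon effect",
        "explanation": "Appeals to presumed majority opinion rather than to evidence.",
    }])

def detect_bias_triggers(text: str) -> list[dict]:
    # Build the prompt from the candidate trigger list and the input text, then parse the JSON reply.
    prompt = PROMPT_TEMPLATE.format(triggers=", ".join(BIAS_TRIGGERS), text=text)
    return json.loads(query_llm(prompt))

if __name__ == "__main__":
    snippet = "Everyone already knows the new policy failed, so the latest report can be ignored."
    for finding in detect_bias_triggers(snippet):
        print(f"{finding['trigger']}: \"{finding['span']}\" ({finding['explanation']})")

In a real VIGIL-style tool such flags would presumably feed into user-facing explanations and into the interpersonal and societal analyses described above, rather than being shown raw.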
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101142229
Start date: 01-10-2024
End date: 30-09-2029
Total budget: EUR 2 490 000.00 - Public funding: EUR 2 490 000.00
Cordis data

Status: SIGNED
Call topic: ERC-2023-ADG
Update date: 26-11-2024
Structured mapping
Horizon Europe
HORIZON.1 Excellent Science
HORIZON.1.1 European Research Council (ERC)
HORIZON.1.1.1 Frontier science
ERC-2023-ADG ERC Advanced Grants