Summary
Online disinformation and fake media content have emerged as a serious threat to democracy, the economy and society. Recent advances in AI have enabled the creation of highly realistic synthetic content and its artificial amplification through AI-powered bot networks. Consequently, it is extremely challenging for researchers and media professionals to assess the veracity and credibility of online content and to uncover highly complex disinformation campaigns.
vera.ai seeks to build trustworthy, professional-grade AI solutions against advanced disinformation techniques, co-created with and for media professionals and researchers, and to lay the foundation for future research on AI against disinformation.
Key novel characteristics of the AI models will be fairness, transparency (including explainability), robustness against concept drift, continuous adaptation to the evolution of disinformation through a fact-checker-in-the-loop approach, and the ability to handle multimodal and multilingual content. Recognising the perils of AI-generated content, we will develop tools for deepfake detection in all formats (audio, video, image, text).
vera.ai adopts a multidisciplinary co-creation approach to AI technology design, coupled with open-source algorithms. A unique key proposition is the grounding of the AI models in continuously collected fact-checking data, gathered from the tens of thousands of instances of real-life content being verified in the InVID-WeVerify plugin and the Truly Media/EDMO platform. Social media and web content will be analysed and contextualised to expose disinformation campaigns and measure their impact.
Results will be validated by professional journalists and fact-checkers from project partners (DW, AFP, EUDL, EBU), by external participants (through our affiliation with EDMO and seven EDMO Hubs), by the community of more than 53,000 users of the InVID-WeVerify verification plugin, and by media literacy, human rights and emergency response organisations.
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101070093 |
Start date: | 15-09-2022 |
End date: | 14-09-2025 |
Total budget: | 5 691 875,00 Euro |
Public funding: | 5 691 875,00 Euro |
Cordis data
Status: | SIGNED |
Call topic: | HORIZON-CL4-2021-HUMAN-01-27 |
Update date: | 09-02-2023 |