AI-CODE - AI services for COntinuous trust in emerging Digital Environments

Summary
The media sector is exposed to and undergoing continuous innovation that occurs at a pace never seen before and has a non-negligible impact on citizens, democracy and society as a whole. A significant booster is generative Artificial Intelligence, which already plays and will continue to play a critical role, in both a positive and a negative sense, in creating and spreading information. Especially in next-generation social media, which refers to the anticipated evolution towards more AI-based, decentralised and immersive virtual environments (such as fediverses and metaverses), generative AI can become the most prominent enabler of growing disinformation accompanied by a lack of trusted information. Media professionals, however, are currently not well equipped with the supporting tools or the knowledge to operate in such already emerging environments.

As a result, there is a tremendous need for innovative (AI-based) solutions that ensure media freedom and pluralism, deliver credible and truthful information, and combat highly disinformative content. The main goal of the AI-CODE project is to evolve state-of-the-art research results (tools, technologies, and know-how) from past and ongoing EU-funded research projects on disinformation into a novel ecosystem of services that proactively supports media professionals in producing trusted information with AI. First, the project aims to identify, analyse, and understand future developments of next-generation social media in the context of the rapid development of generative Artificial Intelligence, and how this combination can impact the (dis)information space. Second, the project aims to provide media professionals with novel AI-based services that coach them on how to work in emerging digital environments and how to use generative AI effectively and credibly, detect new forms of content manipulation, and assess the reputation and credibility of sources and their content.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101135437
Start date: 01-12-2023
End date: 30-11-2026
Total budget - Public funding: 4 969 471,00 Euro
Cordis data

Status: SIGNED
Call topic: HORIZON-CL4-2023-HUMAN-01-05
Update Date: 12-03-2024
Geographical location(s)