Summary
Online hate speech and disinformation have emerged as a major problem for democratic societies worldwide. Governments, companies and civil society groups have responded to this phenomenon by increasingly turning to Artificial Intelligence (AI) as a tool that can detect, decelerate and remove online extreme speech. However, such efforts confront many challenges. One key challenge is the quality, scope, and inclusivity of training datasets. A second is the lack of procedural guidelines and frameworks that can bring cultural contextualization to these systems. The absence of cultural contextualization has resulted in false positives, over-application and systemic bias. The ongoing ERC project has identified the need for a global comparative framework in AI-assisted solutions to address cultural variation, since no catch-all algorithm can work across different contexts. Building on this, the proposed project will address major challenges facing AI-assisted extreme speech moderation by developing an innovative solution of collaborative bottom-up coding. The model, “AI4Dignity”, moves beyond keyword-based detection systems by pioneering a community-based classification approach. It identifies fact-checkers as critical human interlocutors who can bring cultural contextualization to AI-assisted speech moderation in a meaningful and feasible manner. AI4Dignity will be a significant step towards setting procedural benchmarks to operationalize the “human in the loop” principle and towards producing inclusive training datasets for AI systems tackling urgent issues of digital hate and disinformation.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/957442
Start date: 01-01-2021
End date: 30-06-2022
Total budget - Public funding: 150 000,00 Euro
Cordis data
Status: CLOSED
Call topic: ERC-2020-POC
Update Date: 27-04-2024