ERINIA | Evaluating the Robustness of Non-Credible Text Identification by Anticipating Adversarial Actions

Summary
As the challenges posed by misinformation become apparent in modern digital society, state-of-the-art methods of Artificial Intelligence, especially Natural Language Processing (NLP) and Machine Learning, are being considered as countermeasures. Indeed, previous research has shown that NLP solutions can detect phenomena such as fake news, social media bots, or the use of propaganda techniques. However, little attention has been given to the robustness of these approaches, which is especially important in the case of deliberate misinformation, whose authors would likely attempt to deceive any automatic filtering algorithm to achieve their goals.

The goal of the ERINIA project is to explore the robustness of text classifiers in this application area by investigating methods for generating adversarial examples. Such methods apply small perturbations to a given piece of text so that its meaning is preserved but the output of the classifier under investigation is flipped. To that end, previously unexplored directions will be pursued, including training reinforcement learning solutions and leveraging research on text simplification and style transfer. Finally, the developed tools will be used to assess the robustness of current state-of-the-art misinformation detection solutions.
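To make the perturbation idea concrete, the following minimal sketch illustrates a greedy word-substitution attack of the kind studied in this line of work. It is not the project's own method: the classify function is a toy stand-in for a trained misinformation classifier, and the SYNONYMS table, attack routine, and threshold are illustrative assumptions.

# Toy synonym table; a real attack would draw candidates from word
# embeddings or a masked language model. (Illustrative assumption.)
SYNONYMS = {
    "shocking": ["surprising", "startling"],
    "secret": ["undisclosed", "hidden"],
    "cure": ["remedy", "treatment"],
}

def classify(text: str) -> float:
    """Hypothetical stand-in for a trained misinformation classifier:
    returns a 'non-credible' score in [0, 1] based on loaded keywords."""
    loaded = {"shocking", "secret", "cure", "miracle"}
    hits = sum(1 for t in text.lower().split() if t.strip(".,!?") in loaded)
    return min(1.0, hits / 4.0)

def attack(text: str, threshold: float = 0.5) -> str:
    """Greedy word substitution: keep a synonym swap only if it lowers
    the classifier score, and stop once the label flips."""
    tokens = text.split()
    for i in range(len(tokens)):
        word = tokens[i].lower().strip(".,!?")
        for synonym in SYNONYMS.get(word, []):
            candidate = tokens[:i] + [synonym] + tokens[i + 1:]
            if classify(" ".join(candidate)) < classify(" ".join(tokens)):
                tokens = candidate
                break
        if classify(" ".join(tokens)) < threshold:
            break  # label flipped: the perturbed text now evades the filter
    return " ".join(tokens)

original = "Shocking secret cure revealed by doctors!"
adversarial = attack(original)
print(classify(original), original)        # 0.75 - flagged as non-credible
print(classify(adversarial), adversarial)  # 0.25 - evades the classifier

A realistic attack would replace the toy scorer with a trained model's probability output and draw substitutions from embeddings or a language model, but the keep-the-swap-if-the-score-drops loop is the same basic shape.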

The project includes a range of training activities for the researcher and a plan for disseminating the obtained results to various research communities. It also takes society at large into account, as the project outcomes can inform further discussion on whether automatic content filtering is a viable solution to the misinformation problem.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101060930
Start date: 01-11-2022
End date: 31-10-2024
Total budget / Public funding: 165 312,00 Euro
CORDIS data

Status: SIGNED
Call topic: HORIZON-MSCA-2021-PF-01-01
Update date: 09-02-2023
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
      HORIZON.1.2.0 Cross-cutting call topics
        HORIZON-MSCA-2021-PF-01
          HORIZON-MSCA-2021-PF-01-01 MSCA Postdoctoral Fellowships 2021