Summary
"Hate speech is a worldwide phenomenon that is increasingly pervading online spaces, creating an unsafe environment for users. While tech companies address this problem by server-side filtering using machine learning models trained on large datasets, these automatic methods cannot be applied to most languages due to lack of available training data.
Based on recent results of the PI's ERC project on multilingual representation models in low-resource settings, Respond2Hate aims to develop a pilot browser extension that allows users to remove hateful content from their social media feeds locally, without having to rely on the support of tech companies.
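To make the local-filtering idea concrete, the following is a minimal sketch of the client-side filtering step, written in Python for brevity. The model path, label name, and threshold are illustrative assumptions rather than project artifacts; an actual extension would run a similarly compact classifier inside the browser (for example, exported to ONNX/WebAssembly) rather than a Python process.

```python
# Hedged sketch of local feed filtering with a small fine-tuned classifier.
# "./local-hate-speech-classifier" and the "HATE" label are hypothetical
# placeholders for whatever compact model the extension would ship with.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="./local-hate-speech-classifier",  # hypothetical local checkpoint
)

def filter_feed(posts, threshold=0.8):
    """Return only the posts the local model does not flag as hateful."""
    kept = []
    for post in posts:
        result = classifier(post)[0]  # e.g. {"label": "HATE", "score": 0.93}
        is_hateful = result["label"] == "HATE" and result["score"] >= threshold
        if not is_hateful:
            kept.append(post)
    return kept

feed = ["Great match yesterday!", "<a post the user would rather not see>"]
print(filter_feed(feed))
```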
Since hate speech is highly dependent on cultural context, responsive classifiers are needed that adapt to the individual environment. Commercial efforts focus on large-scale, general-purpose models that are often burdened with representation and bias problems, and therefore cope poorly with swiftly changing targets or with information shifts between regional contexts. In contrast, we seek to develop lightweight, adaptive models that require only a small dataset for initial fine-tuning and continuously enhance their capabilities over time.
This is achieved by applying state-of-the-art Natural Language Processing (NLP) and deep learning techniques for pre-trained language models, such as low-resource transfer of hate speech representations from high-resource languages and few-shot learning based on limited user feedback. We have already applied these methods successfully in low-resource multilingual settings and will now validate their use for hate speech filtering.
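As a rough illustration of these two techniques, the sketch below first fine-tunes a multilingual encoder (XLM-RoBERTa) on labelled high-resource data and then performs a small few-shot update from user-flagged posts in the target language. The example texts, labels, and hyperparameters are placeholders under assumed conditions, not project data or the project's actual training recipe.

```python
# Hedged sketch: (1) cross-lingual transfer by fine-tuning a multilingual
# encoder on high-resource (e.g. English) hate speech labels, then
# (2) a few-shot update from a handful of user-flagged target-language posts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # 0 = not hateful, 1 = hateful
)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(texts, labels):
    """One gradient step on a small batch of labelled examples."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# (1) Initial fine-tuning on high-resource labelled data (placeholder examples).
english_texts = ["<a hateful English post>", "Lovely weather today"]
english_labels = [1, 0]
train_step(english_texts, english_labels)

# (2) Few-shot adaptation from limited user feedback in the target language:
# posts the user flags become additional training examples over time.
user_feedback_texts = ["<a target-language post the user marked as hateful>"]
user_feedback_labels = [1]
train_step(user_feedback_texts, user_feedback_labels)
```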
By making hate speech detection and reduction available in "low-resource" countries that have little representation in current training datasets and are currently not well served by governments, industry, or NGOs, Respond2Hate will empower users to control their own exposure to hate speech, fostering a healthier and safer online environment.
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101100870 |
Start date: | 01-11-2023 |
End date: | 30-04-2025 |
Total budget - Public funding: | 150 000,00 Euro
Cordis data
Status: | SIGNED
Call topic: | ERC-2022-POC2
Update Date: | 31-07-2023