Summary
Aiba creates safe digital lives for kids and teenagers. We perform real-time detection of fake profiles and unwanted behavior, such as cyber grooming, toxicity, harassment and bullying, in games and social media. Our machine-learning-powered moderation service makes moderation easier for our customers. We risk-score all ongoing conversations in real time on gaming and social media platforms, helping our customers prevent unwanted behavior before it causes harm. We help them stop cyber grooming early: our initial results show we can detect grooming conversations in fewer than 20 messages. Our service gives customers increased community health and more efficient moderation through proactive, continuous risk scoring, saving large numbers of human hours. Our unique multimodal SaaS approach, combined with machine learning, marks a new era in chatroom moderation.
With a novel approach based on award-winning research, researchers and data scientists at Aiba and NTNU have developed and trained machine learning models on text and keystroke dynamics to detect cyber grooming and other unwanted behavior early, so that platforms can stop conversations and minimize harm and toxicity. The solution has been trained and tested on the short texts typically sent in a chat. From this minimal amount of information, the system has been able to accurately profile the chatter and determine both their age group and gender. The solution has been further developed to perform continuous real-time analysis of text messages and flag suspicious conversations. To do this, natural language processing features (turning text into numerical feature vectors) are used in combination with machine learning techniques.
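The general technique named in the paragraph above (text turned into numerical feature vectors, then fed to a machine learning classifier) can be illustrated with a standard off-the-shelf pipeline. The toy data, the TF-IDF features and the logistic regression classifier below are generic stand-ins chosen for the sketch; they are not Aiba's or NTNU's actual models or training data.

```python
# Illustrative sketch: vectorize short chat messages with TF-IDF and train
# a linear classifier to score new messages. Toy data and model choices are
# hypothetical, for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = suspicious, 0 = benign.
texts = [
    "what school do you go to, don't tell your parents we talk",
    "keep this our little secret, are you home alone",
    "good game, want to queue again tonight?",
    "nice play, see you at practice tomorrow",
]
labels = [1, 1, 0, 0]

# TfidfVectorizer turns each text into a numerical feature vector of
# word and bigram weights; LogisticRegression learns from those vectors.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# predict_proba yields a per-message risk estimate in [0, 1].
risk = model.predict_proba(["this is our secret, is anyone home with you"])[0][1]
```

In a production setting the same shape of pipeline would be trained on far larger labeled corpora and combined with other signals (the source mentions keystroke dynamics) rather than text alone.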
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101114178 |
Start date: | 01-06-2023 |
End date: | 29-02-2024 |
Total budget - Public funding: | - 75 000,00 Euro |
Cordis data
Status: | SIGNED
Call topic: | HORIZON-EIE-2022-SCALEUP-02-02
Update Date: | 31-07-2023
Geographical location(s)