AIAS | AI-ASsisted cybersecurity platform empowering SMEs to defend against adversarial AI attacks

Summary
In recent years, the digital environment and the digital transformation of enterprises of all sizes have made AI-based solutions vital to mission-critical operations. AI-based systems are used in every technical field, including smart cities, self-driving cars, autonomous ships, 5G/6G, and next-generation intrusion detection systems. The industry's widespread adoption of AI systems exposes early adopters to undiscovered vulnerabilities such as data corruption, model theft, and adversarial samples, because they lack the tactical and strategic capabilities to defend against, identify, and respond to attacks on their AI-based systems. Adversaries have created a new attack surface that exploits AI-system vulnerabilities, targeting Machine Learning (ML) and Deep Learning (DL) systems to impair their functionality and performance. Adversarial AI is an emerging threat that could have serious effects in critical areas like finance and healthcare, where AI is widely used. The AIAS project aims to perform in-depth research on adversarial AI in order to design and develop an innovative AI-based security platform for protecting the AI systems and AI-based operations of organisations. The platform relies on adversarial AI defence methods (e.g., adversarial training, adversarial AI attack detection), deception mechanisms (e.g., high-interaction honeypots, digital twins, virtual personas), and explainable AI (XAI) solutions that empower security teams to materialise both "AI for Cybersecurity" (i.e., AI/ML-based tools that enhance attack detection, defence, and response) and "Cybersecurity for AI" (i.e., protection of AI systems against adversarial AI attacks).
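To make the "adversarial samples" threat mentioned above concrete, the sketch below crafts one against a toy logistic-regression classifier using the Fast Gradient Sign Method (FGSM), a standard adversarial-attack technique. This is an illustrative assumption of one attack family the project addresses, not AIAS code; the model weights and inputs are invented for the example.

```python
import math

# Hypothetical toy model: logistic regression with fixed weights.
# Purely illustrative -- not part of the AIAS platform.
W = [2.0, -3.0]
B = 0.5

def predict(x):
    """Probability that input x belongs to the positive class."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y_true, eps=0.3):
    """Fast Gradient Sign Method: nudge each input feature by eps in the
    direction that increases the cross-entropy loss w.r.t. the true label."""
    p = predict(x)
    # For logistic regression, dLoss/dx_i = (p - y_true) * W_i.
    grad = [(p - y_true) * w for w in W]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [0.4, 0.1]            # benign input, classified as positive (p ~ 0.73)
x_adv = fgsm(x, 1.0)      # small perturbation crafted to flip the label
print(predict(x), predict(x_adv))   # the adversarial copy drops below 0.5
```

The same gradient-sign perturbation, applied during training instead of at attack time, is the core of adversarial training, one of the defence methods the summary lists.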
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101131292
Start date: 01-01-2024
End date: 31-12-2027
Total budget / public funding: 1 564 000,00 Euro
Cordis data

Status: SIGNED
Call topic: HORIZON-MSCA-2022-SE-01-01
Update date: 12-03-2024
Structured mapping
Horizon Europe
HORIZON.1 Excellent Science
HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
HORIZON.1.2.0 Cross-cutting call topics
HORIZON-MSCA-2022-SE-01
HORIZON-MSCA-2022-SE-01-01 MSCA Staff Exchanges 2022