SPATIAL | Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning

Summary
The SPATIAL (Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning) project addresses the challenges of black-box AI and data management in cybersecurity by designing and developing resilient accountable metrics, privacy-preserving methods, verification tools, and a system framework that serve as critical building blocks for trustworthy AI in security solutions. The main objectives are:
1) To develop systematic verification and validation mechanisms, in software and hardware, that ensure AI transparency and explainability in security solution development.
2) To develop system solutions, platforms, and standards that enhance resilience in the training and deployment of AI in decentralized, uncontrolled environments.
3) To define effective and practical adoption and adaptation guidelines that ensure streamlined implementation of trustworthy AI solutions.
4) To create educational modules that give current and future AI engineers and developers the technical skills and the ethical and socio-legal awareness needed to develop security solutions accountably.
5) To develop a communication framework that enables an accountable and transparent understanding of AI applications for users, software developers, and security service providers.
Beyond these technical measures, the SPATIAL project aims to build the skills and education needed for AI security, striking a balance among technological complexity, societal complexity, and value conflicts in AI deployment. The project covers data privacy, resilience engineering, and legal-ethical accountability, in line with the EU's agenda for trustworthy AI. In addition, the work carried out in SPATIAL on both social and technical aspects will serve as a stepping stone towards an appropriate governance and regulatory framework for AI-driven security in Europe.
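The summary names privacy-preserving methods and resilient AI training in decentralized, uncontrolled environments, but does not prescribe specific techniques. As a minimal illustrative sketch only (an assumption, not the project's actual design), the snippet below shows one round of federated averaging with per-client update clipping, a common building block for bounding the influence any single unreliable or malicious participant has on a jointly trained model.

import numpy as np

# Hypothetical illustration of resilient, decentralized model aggregation.
# The SPATIAL abstract does not specify this method; all names and
# parameters here are assumptions chosen for the example.

def clip_update(update, max_norm=1.0):
    """Scale a client's model update so its L2 norm is at most max_norm."""
    norm = np.linalg.norm(update)
    return update if norm <= max_norm else update * (max_norm / norm)

def federated_average(global_model, client_updates, max_norm=1.0):
    """Aggregate clipped client updates into a new global model."""
    clipped = [clip_update(u, max_norm) for u in client_updates]
    return global_model + np.mean(clipped, axis=0)

# Toy usage: three clients send updates of differing magnitude; clipping
# bounds the contribution of the outlier (possibly faulty) third client.
global_model = np.zeros(4)
client_updates = [np.array([0.1, 0.2, -0.1, 0.0]),
                  np.array([0.05, 0.0, 0.1, 0.2]),
                  np.array([5.0, -5.0, 5.0, -5.0])]  # outlier client
print(federated_average(global_model, client_updates))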
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101021808
Start date: 01-09-2021
End date: 31-08-2024
Total budget: 4 961 976,00 EUR
Public funding: 4 961 976,00 EUR
Cordis data

Status: SIGNED
Call topic: SU-DS02-2020
Update date: 27-10-2022
Geographical location(s)