SAFEXPLAIN | SAFE AND EXPLAINABLE CRITICAL EMBEDDED SYSTEMS BASED ON AI

Summary
Deep Learning (DL) techniques are key to most future advanced software functions in Critical Autonomous AI-based Systems (CAIS) in cars, trains and satellites. Hence, those CAIS industries depend on their ability to design, implement, qualify, and certify DL-based software products at bounded effort and cost.
There is a fundamental gap between the Functional Safety (FUSA) requirements of CAIS and the nature of the DL solutions needed to satisfy those requirements. The lack of transparency (mainly explainability and traceability) and the data-dependent, stochastic nature of DL software clash with the need for deterministic, verifiable, pass/fail test-based software solutions in CAIS.
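
To make the clash concrete, consider the difference between verifying a deterministic function and a learned one. The sketch below is illustrative only (the functions, numbers and thresholds are invented for this example, not taken from the project): a deterministic component admits an exact pass/fail test, whereas a DL-like component only supports statistical claims over many trials.

    import random

    # Deterministic component: one pass/fail test fully characterizes it.
    def braking_distance_m(speed_mps: float) -> float:
        # Simplified physics: v^2 / (2*a), with a fixed 8 m/s^2 deceleration.
        return speed_mps ** 2 / (2 * 8.0)

    assert braking_distance_m(20.0) == 25.0  # same input, same output, always

    # DL-like component: behavior depends on data and training randomness,
    # so per-input pass/fail verdicts are ill-defined; only statistical
    # properties over many inputs can be stated and checked.
    def learned_detector(true_label: int, accuracy: float = 0.97) -> int:
        # Stand-in for a trained classifier with ~97% expected accuracy.
        return true_label if random.random() < accuracy else 1 - true_label

    trials = 10_000
    hits = sum(learned_detector(1) == 1 for _ in range(trials))
    # The verifiable claim becomes "accuracy >= 0.95 with high confidence",
    # not "output == expected" for every single input.
    assert hits / trials >= 0.95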
SAFEXPLAIN tackles this challenge by providing a novel, flexible approach that enables the certification (and hence adoption) of DL-based solutions in CAIS. It does so by (1) architecting transparent DL solutions that can explain why they satisfy FUSA requirements, with end-to-end traceability, with specific approaches to assess whether individual predictions can be trusted, and with strategies to reach (and prove) correct operation in accordance with certification standards; and (2) devising alternative, increasingly complex FUSA design safety patterns for different DL usage levels (i.e. with varying safety requirements), allowing DL to be used in any CAIS functionality at varying levels of criticality and fault tolerance.
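
As a purely illustrative sketch of what such a safety pattern can look like (the names, checks and threshold below are hypothetical, not taken from the project's actual design), a DL model can be wrapped by a runtime supervisor that accepts a prediction only when it is both confident and plausible, and otherwise degrades to a safe fallback:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Prediction:
        label: str
        confidence: float  # model-reported score in [0, 1]

    def supervised_inference(
        model: Callable[[object], Prediction],
        plausibility_checks: List[Callable[[object, Prediction], bool]],
        fallback: Callable[[object], Prediction],
        x: object,
        min_confidence: float = 0.9,
    ) -> Prediction:
        # Doer/checker pattern: the DL "doer" proposes an output; independent
        # "checkers" decide whether that output can be trusted at runtime.
        pred = model(x)
        trusted = pred.confidence >= min_confidence and all(
            check(x, pred) for check in plausibility_checks
        )
        return pred if trusted else fallback(x)

    # Hypothetical usage: an obstacle detector cross-checked against another
    # sensor, with "assume obstacle" as the conservative safe fallback.
    detector = lambda frame: Prediction("no_obstacle", 0.84)
    range_check = lambda frame, pred: True  # e.g. agreement with radar/lidar
    safe_default = lambda frame: Prediction("obstacle", 1.0)

    result = supervised_inference(detector, [range_check], safe_default, x=None)
    print(result)  # confidence 0.84 < 0.9, so the safe fallback is returned

In a scheme of this kind, higher criticality would translate into stricter checks and more conservative fallbacks, which is one way the "varying safety requirements" of different DL usage levels can be read.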
SAFEXPLAIN brings together a highly skilled and complementary consortium to tackle this endeavor, including three research centers, RISE (AI expertise), IKR (FUSA expertise), and BSC (platform expertise), and three CAIS case studies: automotive (NAV), space (AIKO), and railway (IKR). SAFEXPLAIN's DL-based solutions are assessed in an industrial toolset (EXI). Finally, to prove that the achieved transparency levels are fully compliant with FUSA, solutions are reviewed by internal certification experts (EXI) and by external experts subcontracted for an independent assessment.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101069595
Start date: 01-10-2022
End date: 30-09-2025
Total budget: 3 891 875,00 EUR (public funding: 3 891 875,00 EUR)
Cordis data

Status: SIGNED
Call topic: HORIZON-CL4-2021-HUMAN-01-01
Update date: 09-02-2023
Structured mapping
Artificial Intelligence, Data and Robotics Partnership (ADR)
  ADR Partnership Call 2021
    HORIZON-CL4-2021-HUMAN-01-01 Verifiable robustness, energy efficiency and transparency for Trustworthy AI: Scientific excellence boosting industrial competitiveness (AI, Data and Robotics Partnership) (RIA)
Horizon Europe
  HORIZON.2 Global Challenges and European Industrial Competitiveness
    HORIZON.2.4 Digital, Industry and Space
      HORIZON.2.4.5 Artificial Intelligence and Robotics
        HORIZON-CL4-2021-HUMAN-01
          HORIZON-CL4-2021-HUMAN-01-01 Verifiable robustness, energy efficiency and transparency for Trustworthy AI: Scientific excellence boosting industrial competitiveness (AI, Data and Robotics Partnership) (RIA)