FAITH | Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains

Summary
The need for trustworthy AI systems across diverse application domains has become pressing, not least because of the critical role AI plays in the ongoing digital transformation addressing urgent socio-economic needs. Despite numerous recommendations and standards, most AI practitioners and decision makers still prioritize system performance as the main metric in their workflows, often neglecting to verify and quantify core attributes of trustworthiness, including traceability, robustness, security, transparency, and usability. In addition, trustworthiness is not assessed throughout the AI system development lifecycle, so developers often fail to gain a holistic view of the different AI risks. Finally, the lack of a unified, multi-disciplinary AI, Data and Robotics ecosystem for assessing trustworthiness across several critical AI application domains hampers the definition and implementation of a robust framework for a paradigm shift towards increased trustworthiness and accelerated AI adoption.
To address these critical unmet needs, the FAITH innovation action will develop and validate a human-centric trustworthiness-optimization ecosystem that enables measuring, optimizing, and counteracting the risks associated with AI adoption and trustworthiness in critical domains, namely robotics, education, media, transport, healthcare, active ageing, and industrial processes, through seven international Large-Scale Pilots. Notably, cross-fertilization actions will create a joint outcome that brings together the visions and specificities of all the pilots. To this end, the project will adopt a dynamic risk-management approach following EU legislative instruments and ENISA guidelines and will deliver tools to be widely used across different countries and settings, while diverse stakeholder communities will be engaged in each pilot, delivering seven sector-specific reports on trustworthiness to accelerate AI take-up.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101135932
Start date: 01-01-2024
End date: 31-12-2027
Total budget: 8 629 150,00 Euro; Public funding: 7 455 585,00 Euro
Cordis data


Status

SIGNED

Call topic

HORIZON-CL4-2023-HUMAN-01-02

Update Date

12-03-2024
Structured mapping
Horizon Europe
HORIZON.2 Global Challenges and European Industrial Competitiveness
HORIZON.2.4 Digital, Industry and Space
HORIZON.2.4.0 Cross-cutting call topics
HORIZON-CL4-2023-HUMAN-01-CNECT
HORIZON-CL4-2023-HUMAN-01-02 Large Scale pilots on trustworthy AI data and robotics addressing key societal challenges (AI Data and Robotics Partnership) (IA)
HORIZON.2.4.5 Artificial Intelligence and Robotics
HORIZON-CL4-2023-HUMAN-01-CNECT
HORIZON-CL4-2023-HUMAN-01-02 Large Scale pilots on trustworthy AI data and robotics addressing key societal challenges (AI Data and Robotics Partnership) (IA)