AutoFair | Human-Compatible Artificial Intelligence with Guarantees

Summary
In this proposal, we address transparency and explainability of AI using approaches inspired by control theory. Notably, we consider a comprehensive and flexible certification of properties of AI pipelines, certain closed loops, and more complicated interconnections. At one extreme, one could consider risk-averse a priori guarantees via hard constraints on certain bias measures in the training process. At the other extreme, one could consider nuanced post hoc communication of the exact tradeoffs involved in AI pipeline choices and their effect on industrial and bias outcomes. Both extremes offer little scope for optimizing the pipeline and are inflexible in explaining the pipeline’s fairness-related qualities. Seeking the middle ground, we suggest a priori certification of fairness-related qualities in AI pipelines via modular compositions of pre-processing, training, inference, and post-processing steps with certain properties. Furthermore, we present an extensive programme in the explainability of fairness-related qualities. We seek to inform both the developer and the user thoroughly regarding the possible algorithmic choices and their expected effects. Overall, this will effectively support the development of AI pipelines with guaranteed levels of performance, explained clearly. Three use cases (in Human Resources automation, Financial Technology, and Advertising) will be used to assess the effectiveness of our approaches.
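
As an illustration only, and not the project's actual framework, the minimal Python sketch below composes a toy training stage with a post-processing stage and checks one bias measure, the demographic parity difference, against an a priori bound before the composition is accepted. The stage names, the choice of demographic parity as the bias measure, and the bound of 0.05 are assumptions made for this example.

```python
# Minimal sketch (illustrative assumptions, not the AutoFair framework):
# compose pipeline stages and certify a bias measure against an a priori bound.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

class ThresholdClassifier:
    """Toy training stage: picks a single score threshold (a real stage would fit a model to y)."""
    def fit(self, scores, y):
        self.threshold = np.median(scores)
        return self
    def predict(self, scores):
        return (scores >= self.threshold).astype(int)

class GroupThresholdPostprocessor:
    """Toy post-processing stage: per-group thresholds that equalise positive-prediction rates."""
    def fit(self, scores, group, target_rate):
        self.thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
        return self
    def predict(self, scores, group):
        return np.array([scores[i] >= self.thresholds[g] for i, g in enumerate(group)]).astype(int)

def certify(bias_measure, bound):
    """A priori check in the spirit of the proposal: reject compositions exceeding the bound."""
    assert bias_measure <= bound, f"bias measure {bias_measure:.3f} exceeds certified bound {bound}"

# Synthetic data: scores mildly correlated with a binary sensitive attribute 'group'.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=0.2 * group, scale=1.0, size=1000)
y = (scores + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = ThresholdClassifier().fit(scores, y)
raw_pred = clf.predict(scores)
post = GroupThresholdPostprocessor().fit(scores, group, target_rate=raw_pred.mean())
fair_pred = post.predict(scores, group)

print("before post-processing:", demographic_parity_difference(raw_pred, group))
print("after post-processing: ", demographic_parity_difference(fair_pred, group))
certify(demographic_parity_difference(fair_pred, group), bound=0.05)  # illustrative bound
```

In the same spirit, pipeline stages developed in the project would each expose certified fairness-related properties, so that bounds of this kind hold for the composition a priori rather than being verified only post hoc.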
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101070568
Start date: 01-10-2022
End date: 30-09-2025
Total budget: 3 350 720,00 Euro; Public funding: 3 350 720,00 Euro
Cordis data

Status: SIGNED
Call topic: HORIZON-CL4-2021-HUMAN-01-01
Update date: 09-02-2023
Geographical location(s)
Structured mapping
Artificial Intelligence, Data and Robotics Partnership (ADR)
  ADR Partnership Call 2021
    HORIZON-CL4-2021-HUMAN-01-01 Verifiable robustness, energy efficiency and transparency for Trustworthy AI: Scientific excellence boosting industrial competitiveness (AI, Data and Robotics Partnership) (RIA)
Horizon Europe
  HORIZON.2 Global Challenges and European Industrial Competitiveness
    HORIZON.2.4 Digital, Industry and Space
      HORIZON.2.4.5 Artificial Intelligence and Robotics
        HORIZON-CL4-2021-HUMAN-01
          HORIZON-CL4-2021-HUMAN-01-01 Verifiable robustness, energy efficiency and transparency for Trustworthy AI: Scientific excellence boosting industrial competitiveness (AI, Data and Robotics Partnership) (RIA)