Summary
In this proposal, we address the transparency and explainability of AI using approaches inspired by control theory. Notably, we consider comprehensive and flexible certification of properties of AI pipelines, certain closed loops, and more complicated interconnections. At one extreme, one could consider risk-averse a priori guarantees via hard constraints on certain bias measures in the training process. At the other extreme, one could consider nuanced, post hoc communication of the exact tradeoffs involved in AI pipeline choices and their effect on industrial and bias outcomes. Both extremes offer little room for optimizing the pipeline and are inflexible in explaining the pipeline's fairness-related qualities. Seeking the middle ground, we suggest a priori certification of fairness-related qualities in AI pipelines via modular compositions of pre-processing, training, inference, and post-processing steps with certain properties. Furthermore, we present an extensive programme in the explainability of fairness-related qualities. We seek to inform both the developer and the user thoroughly regarding the possible algorithmic choices and their expected effects. Overall, this will effectively support the development of AI pipelines with guaranteed levels of performance, explained clearly. Three use cases (in Human Resources automation, Financial Technology, and Advertising) will be used to assess the effectiveness of our approaches.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101070568
Start date: 01-10-2022
End date: 30-09-2025
Total budget: 3 350 720,00 Euro
Public funding: 3 350 720,00 Euro
Cordis data
Status: SIGNED
Call topic: HORIZON-CL4-2021-HUMAN-01-01
Update date: 09-02-2023