Summary
Today’s AI landscape is permeated by plentiful data and dominated by powerful methods with the potential to impact a wide range of human sectors, including healthcare and the practice of law. Yet this potential is hindered by the opacity of most data-centric AI methods: it is widely acknowledged that AI cannot fully benefit society until its widespread inability to explain its outputs is addressed, as this inability causes human mistrust and raises doubts about regulatory and ethical compliance. Extensive research efforts are currently devoted to explainable AI, but they mostly focus on engineering shallow, static explanations that provide little transparency about how the explained outputs are obtained and limited opportunity for human insight.

ADIX aims to define a novel scientific paradigm of deep, interactive explanations that can be deployed alongside a variety of data-centric AI methods to explain their outputs by providing justifications in their support. These justifications can be progressively questioned by humans, and the outputs of the AI methods refined in response to human feedback, within explanatory exchanges between humans and machines.

This ambitious paradigm will be realised using computational argumentation as the underpinning, unifying theoretical foundation. I will define argumentative abstractions of the inner workings of a variety of data-centric AI methods, from which various explanation types providing argumentative grounds for outputs can be drawn; generate explanatory exchanges between humans and machines from interaction patterns instantiated on these argumentative abstractions and explanation types; and develop argumentative wrappers from human feedback. The novel paradigm will be theoretically defined, informed and tested by experiments and empirical evaluation, and it will lead to a radical re-thinking of explainable AI that can work in synergy with humans within a human-centred but AI-supported society.
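To make the underpinning concrete, the sketch below illustrates computational argumentation in the abstract (Dung-style) setting: a set of arguments with an attack relation, the grounded extension computed as a fixpoint, and a simple "why" explanation reporting which accepted arguments defend a given output. This is a minimal, hypothetical example, not the ADIX methodology itself; all names (approve, high_risk, stable_income) are illustrative placeholders for the kind of argumentative abstraction and explanation the project builds on.

```python
# Minimal, hypothetical sketch (not the ADIX system): a Dung-style abstract
# argumentation framework whose grounded extension yields an argumentative
# justification ("why" explanation) for an accepted output.

from typing import Dict, Set, Tuple


class ArgumentationFramework:
    def __init__(self, arguments: Set[str], attacks: Set[Tuple[str, str]]):
        self.arguments = arguments
        self.attacks = attacks  # (attacker, attacked) pairs

    def attackers_of(self, arg: str) -> Set[str]:
        return {a for (a, b) in self.attacks if b == arg}

    def grounded_extension(self) -> Set[str]:
        # Iterate the characteristic function from the empty set to its least
        # fixpoint: an argument is accepted if each of its attackers is
        # attacked by an already accepted argument.
        accepted: Set[str] = set()
        while True:
            defended = {
                arg for arg in self.arguments
                if all(self.attackers_of(att) & accepted for att in self.attackers_of(arg))
            }
            if defended == accepted:
                return accepted
            accepted = defended

    def why(self, arg: str) -> Dict[str, Set[str]]:
        # Shallow explanation: for each attacker of `arg`, the accepted
        # arguments that defend `arg` against it.
        accepted = self.grounded_extension()
        return {att: self.attackers_of(att) & accepted for att in self.attackers_of(arg)}


if __name__ == "__main__":
    # Toy argumentative abstraction of a classifier output "approve", attacked
    # by a risk argument that is in turn counter-attacked by supporting evidence.
    af = ArgumentationFramework(
        arguments={"approve", "high_risk", "stable_income"},
        attacks={("high_risk", "approve"), ("stable_income", "high_risk")},
    )
    print(af.grounded_extension())  # {'approve', 'stable_income'} (set order may vary)
    print(af.why("approve"))        # {'high_risk': {'stable_income'}}
```

In the interactive setting envisaged by the project, one could imagine a human questioning such a justification by contributing further arguments or attacks, after which acceptance and explanations would be recomputed.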
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101020934
Start date: 01-10-2021
End date: 30-09-2026
Total budget - Public funding: 2 500 001,25 Euro - 2 500 000,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2020-ADG
Update Date: 27-04-2024