RRR-XAI: Right for the Right Reason eXplainable Artificial Intelligence

Summary
Deep Learning (DL) is a form of machine learning (ML) that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts. This hierarchy allows a DL model to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep, hence the name. A deep neural network (DNN) is based on an artificial neural network model, and its core strength is that no human assistance is needed to formally specify all the knowledge the model requires. This makes DNNs the state of the art in Artificial Intelligence (AI). Despite their top performance and the ubiquity of their applications (from healthcare to autonomous cars), DNNs suffer from serious shortcomings. First, DNNs are considered black-box models, i.e., complex and opaque algorithms that are hard to interpret and diagnose. Second, they suffer from bias, and the testing protocols for automatic recognition are not fair, because the models learn spurious patterns in the data that are not causally related to the output; e.g., they may focus on areas outside the lung in chest X-ray images to predict the presence of COVID-19. Although DNNs outperform many other methods, they are often not right for the right reasons (RRR). RRR-XAI tackles this mismatch and bridges this gap through a tight integration of DL and symbolic AI, with the principal objective of making DL explainable. To achieve this, I will first follow the rationale behind XAI under the RRR philosophy and perform analyses to understand two types of phenomena that cause trouble in DNNs. Second, I will use: 1) domain knowledge expertise as supporting evidence to explain a particular model output; and 2) neural-symbolic computation to communicate the explanation of such phenomena in natural language. I will study two practical use cases where supporting explanations of the model output are critical: a) COVID-19 prediction from chest X-ray images, and b) weapon detection in alarm systems and crowds from images.
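To make the RRR idea concrete: one standard remedy, in the spirit of the "right for the right reasons" loss of Ross et al. (2017), augments the training objective with a penalty on input gradients that fall inside regions a domain expert has marked as irrelevant. The sketch below is illustrative only and not taken from the project description; it assumes a PyTorch image classifier, batches of images with labels, and binary masks (irrelevant_mask) covering, e.g., everything outside the lung in a chest X-ray. The function name and the lam weight are hypothetical.

    import torch
    import torch.nn.functional as F

    def rrr_loss(model, images, labels, irrelevant_mask, lam=10.0):
        # Cross-entropy on the prediction, plus a "right reasons" penalty on
        # input gradients that fall inside expert-annotated irrelevant regions
        # (e.g., everything outside the lung in a chest X-ray).
        images = images.clone().requires_grad_(True)
        logits = model(images)
        ce = F.cross_entropy(logits, labels)

        # Gradient of the summed log-probabilities with respect to the pixels.
        log_probs = F.log_softmax(logits, dim=1)
        grads, = torch.autograd.grad(log_probs.sum(), images, create_graph=True)

        # Penalise only the evidence the model draws from irrelevant regions;
        # lam (illustrative value) trades accuracy against this penalty.
        wrong_reason = (irrelevant_mask * grads).pow(2).mean()
        return ce + lam * wrong_reason

Minimising such a loss keeps the classifier accurate while discouraging it from basing its COVID-19 prediction on pixels outside the lung; the mask itself is one place where domain knowledge expertise enters as supporting evidence.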
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101059332
Start date: 01-11-2022
End date: 31-10-2024
Total budget / Public funding: 165 312,00 Euro
Cordis data

Status

TERMINATED

Call topic

HORIZON-MSCA-2021-PF-01-01

Update Date

09-02-2023
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
      HORIZON.1.2.0 Cross-cutting call topics
        HORIZON-MSCA-2021-PF-01
          HORIZON-MSCA-2021-PF-01-01 MSCA Postdoctoral Fellowships 2021