NL4XAI | Interactive Natural Language Technology for Explainable Artificial Intelligence

Summary
According to Polanyi's paradox, humans know more than they can explain, mainly due to the huge amount of implicit knowledge they unconsciously acquire through culture, heritage, etc. The same applies to Artificial Intelligence (AI) systems that are mainly learnt automatically from data. However, in accordance with EU law, humans have a right to an explanation of decisions affecting them, no matter who (or what AI system) makes such a decision. NL4XAI will train 11 creative, entrepreneurial and innovative early-stage researchers (ESRs), who will face the challenge of making AI self-explanatory, thus contributing to translating knowledge into products and services for economic and social benefit, with the support of Explainable AI (XAI) systems. Moreover, the focus of NL4XAI is on the automatic generation of interactive explanations in natural language (NL), as humans naturally do, as a complement to visualization tools. As a result, ESRs are expected to facilitate the use of AI models and techniques even by non-expert users. All their developments will be validated by humans in specific use cases, and the main outcomes will be publicly reported and integrated into a common open-source software framework for XAI that will be accessible to all European citizens. In addition, results to be exploited commercially will be protected through licenses or patents. It is worth noting that we have selected some of the most prominent European researchers (from both academia and industry) in each of the related fundamental topics and created a joint high-quality training program that can be seen as a pyramid with the main research objective (designing and building XAI models) at the top. It will be achieved by jointly addressing the research objectives at the pyramid's base: NL generation and processing for XAI; Argumentation Technology for XAI; and Interactive Interfaces for XAI. ESRs will also be trained in ethical and legal issues, as well as in transversal skills.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/860621
Start date: 01-10-2019
End date: 30-09-2024
Total budget: 2 843 888,04 Euro
Public funding: 2 843 888,00 Euro
Cordis data

Status: SIGNED
Call topic: MSCA-ITN-2019
Update Date: 28-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
H2020-EU.1.3.1. Fostering new skills by means of excellent initial training of researchers
H2020-MSCA-ITN-2019
MSCA-ITN-2019