DEXIM | Deeply Explainable Intelligent Machines

Summary
Explanations are valuable because they scaffold the kind of learning that supports adaptive behaviour: for example, they enable users to adapt to the situations that are about to arise. Explanations also help us attain a stable environment and the possibility of controlling it; that is, they put us in a better position to control the future. In the medical domain, explanations can help patients identify and monitor the abnormal behaviour of their ailment. In the domain of self-driving vehicles, they can warn the user of a critical state and collaborate with the user to prevent a wrong decision. In the domain of satellite imagery, an explanatory monitoring system that justifies its evidence for an approaching hurricane can save millions of lives. Hence, a learning machine that a user can trust and easily operate needs to be equipped with the ability to explain itself. Moreover, under the GDPR, automated decision makers are required by law to be transparent.

As decision makers, humans can justify their decisions in natural language and point to the evidence in the visual world that led to those decisions. In contrast, artificially intelligent systems are frequently opaque and unable to explain their decisions. This is particularly concerning because such systems ultimately fail to build trust with human users.

The goal of this proposal is to build a fully transparent, end-to-end trainable, and explainable deep learning approach for visual scene understanding. To achieve this, we will exploit the positive interactions between multiple data modalities and incorporate uncertainty estimates, temporal continuity constraints, and memory mechanisms. The output of this proposal will have direct consequences for many practical applications, most notably in mobile robotics and the intelligent-vehicle industry, and will therefore strengthen user trust in a very competitive market.
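To make the intended kind of explainability concrete, the sketch below (in PyTorch) shows a toy classifier that, alongside its decision, returns an attention map over image regions as visual evidence and a predictive-entropy score as a crude uncertainty signal. All module names, dimensions, and the choice of entropy as the uncertainty measure are our own illustrative assumptions, not the project's actual architecture.

import torch
import torch.nn as nn

class ExplainableClassifier(nn.Module):
    """Toy model returning a decision, an attention map over image
    regions (the visual evidence), and an uncertainty estimate.
    All dimensions are illustrative."""

    def __init__(self, feat_dim=256, num_classes=10):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)           # scores each region
        self.cls = nn.Linear(feat_dim, num_classes)  # decision head

    def forward(self, region_feats):
        # region_feats: (batch, regions, feat_dim), e.g. CNN grid features
        scores = self.attn(region_feats).squeeze(-1)           # (B, R)
        alpha = torch.softmax(scores, dim=-1)                  # evidence weights
        pooled = (alpha.unsqueeze(-1) * region_feats).sum(1)   # weighted evidence
        logits = self.cls(pooled)                              # decision
        probs = torch.softmax(logits, dim=-1)
        # Predictive entropy as a simple per-example uncertainty signal.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
        return logits, alpha, entropy

model = ExplainableClassifier()
feats = torch.randn(2, 49, 256)          # 2 images, 7x7 region grid
logits, alpha, entropy = model(feats)
print(logits.shape, alpha.shape, entropy.shape)

A natural-language justification head could be attached to the pooled evidence vector in the same fashion, and the temporal continuity and memory mechanisms mentioned above would extend this single-frame sketch to video.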
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/853489
Start date: 01-12-2019
End date: 31-12-2025
Total budget: EUR 1 500 000 (public funding: EUR 1 500 000)
Cordis data

Status: SIGNED

Call topic: ERC-2019-STG

Update date: 27-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
ERC-2019
ERC-2019-STG