CONVEY | Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI

Summary
From self-driving cars to agents recommending medical treatments, Artificial Intelligence (AI) agents are becoming increasingly prevalent. These agents have the potential to benefit society in areas such as transportation, healthcare, and education. Importantly, they do not operate in a vacuum: people interact with agents in a wide range of settings. To interact with agents effectively, people need to be able to anticipate and understand their behavior. For example, a driver of an autonomous vehicle will need to anticipate situations in which the car fails and hands over control, while a clinician will need to understand the treatment regimen recommended by an agent to determine whether it aligns with the patient's preferences.

Explainable AI methods aim to support users by making the behavior of AI systems more transparent. However, the state of the art in explainable AI is lacking in several key respects. First, the majority of existing methods focus on providing "local" explanations of one-shot decisions made by machine learning models; they are not adequate for conveying the behavior of agents that act over extended time horizons in large state spaces. Second, most existing methods do not consider the context in which explanations are deployed, including the specific needs and characteristics of users. Finally, most methods are not interactive, limiting users' ability to gain a thorough understanding of the agents.
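To make the contrast concrete, here is a minimal Python sketch of a "local" explanation of a single one-shot decision: a leave-one-out feature attribution over a toy scoring model. All feature names and weights are invented for illustration; the point is that such an explanation accounts for one prediction only and says nothing about how an agent behaves across a sequence of states.

```python
# Minimal sketch of a "local" explanation for a one-shot decision:
# leave-one-out feature attribution on a toy scoring model.
# Feature names and weights are illustrative only.

def score(features: dict[str, float]) -> float:
    """Toy model: a weighted sum standing in for any black-box predictor."""
    weights = {"age": 0.2, "blood_pressure": 0.5, "cholesterol": 0.3}
    return sum(weights[name] * value for name, value in features.items())

def leave_one_out_attribution(features: dict[str, float]) -> dict[str, float]:
    """Attribute the prediction to each feature by the change in score when
    that feature is zeroed out. This explains a single decision; it does not
    convey how an agent acts over time in a large state space."""
    base = score(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        attributions[name] = base - score(perturbed)
    return attributions

patient = {"age": 0.6, "blood_pressure": 0.9, "cholesterol": 0.4}
print(leave_one_out_attribution(patient))
```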

The overarching objective of this proposal is to develop adaptive and interactive methods for conveying the behavior of agents and multi-agent teams operating in sequential decision-making settings. To tackle this challenge, the proposed research will draw on insights and methodologies from AI and human-computer interaction. It will develop algorithms that determine what information about agents’ behavior to share with users, tailored to users’ needs and characteristics, and interfaces that allow users to proactively explore agents’ capabilities.
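As a hedged illustration of the kind of algorithm envisioned, the sketch below shows one common way to select which situations to surface to a user when summarizing an agent's behavior: ranking states by how much the choice of action matters there, measured by the spread of the agent's Q-values. The Q-table, state names, and selection criterion are illustrative assumptions, not the project's method.

```python
# Sketch: choose which situations to show a user when summarizing an
# agent's behavior by ranking states by action-choice importance
# (spread of Q-values). The Q-table below is a toy stand-in.

Q = {
    "merge_lane": {"brake": 0.1, "accelerate": 0.9, "hold": 0.3},
    "empty_road": {"brake": 0.5, "accelerate": 0.6, "hold": 0.55},
    "pedestrian": {"brake": 0.95, "accelerate": 0.0, "hold": 0.2},
}

def importance(q_values: dict[str, float]) -> float:
    """A state is 'important' when the best and worst actions differ a lot."""
    return max(q_values.values()) - min(q_values.values())

def summarize(q_table: dict[str, dict[str, float]], k: int) -> list[str]:
    """Return the k states where showing the agent's choice is most informative."""
    return sorted(q_table, key=lambda s: importance(q_table[s]), reverse=True)[:k]

print(summarize(Q, k=2))  # ['pedestrian', 'merge_lane']
```

In a full system, such a selection criterion could itself be adapted to a specific user's needs and characteristics, in line with the proposal's emphasis on tailoring.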
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101078158
Start date: 01-01-2023
End date: 31-12-2027
Total budget: 1 470 250,00 Euro
Public funding: 1 470 250,00 Euro
CORDIS data

Status: SIGNED
Call topic: ERC-2022-STG
Update date: 09-02-2023
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.0 Cross-cutting call topics
        ERC-2022-STG ERC STARTING GRANTS
      HORIZON.1.1.1 Frontier science
        ERC-2022-STG ERC STARTING GRANTS