Summary
MAHALO asks a simple but profound question: in the emerging age of Machine Learning (ML), should we be developing automation that matches human behavior (i.e., conformal), or automation that is understandable to the human (i.e., transparent)? Further, what tradeoffs exist in terms of controller trust, acceptance, and performance? To answer these questions, MAHALO will:
• Develop an individually tuned ML system, composed of layered deep learning and reinforcement learning models and trained on controller performance (context-specific solutions), strategies (eye tracking), and physiological data, which learns to solve ATC conflicts;
• Couple this to an enhanced en-route CD&R prototype display that presents the machine's rationale for the ML output;
• Evaluate, in real-time simulations, the relative impact of ML conformance, transparency, and traffic complexity on controller understanding, trust, acceptance, workload, and performance (see the illustrative sketch after this list); and
• Define a framework to guide design of future AI systems, including guidance on the effects of conformance, transparency, complexity, and non-nominal conditions.
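To make the evaluation objective above more concrete, the following is a purely illustrative sketch of how the conformance × transparency × complexity conditions of such a simulation study could be enumerated. The factor levels, the 2 × 2 × 2 layout, and all variable names are assumptions for illustration, not the project's actual experimental design.

```python
from itertools import product

# Hypothetical factor levels -- illustration only, not MAHALO's actual design.
CONFORMANCE = ["individual", "group_average"]
TRANSPARENCY = ["advisory_only", "with_rationale"]
COMPLEXITY = ["low", "high"]

# Dependent measures named in the evaluation objective above.
MEASURES = ["understanding", "trust", "acceptance", "workload", "performance"]

# Each combination would correspond to one simulated en-route scenario block.
conditions = [
    {"conformance": c, "transparency": t, "complexity": x}
    for c, t, x in product(CONFORMANCE, TRANSPARENCY, COMPLEXITY)
]

for cond in conditions:
    print(cond)  # 2 x 2 x 2 = 8 cells in this hypothetical layout
```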
Building on the collective experience within the team, past research, and recent advances in the areas of ML and ecological interface design (EID), MAHALO will take a bold step forward: to create a system that learns from the individual operator, but also provides the operator insight into what the machine has learnt. Several models will be trained and evaluated to reflect a continuum from individually matched to group-average. The most recent work in the areas of automation transparency, Explainable AI (XAI), and ML interpretability will be explored to afford understanding of ML advisories. The user interface will present ML outputs in terms of: current and future (what-if) traffic patterns; intended resolution maneuvers; and rule-based rationale. The project’s output will add knowledge and design principles on how AI and transparency can be used to improve ATM performance, capacity, and safety.
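As a minimal sketch of the "individually-matched to group-average" continuum described above, one simple way to realize it is a weighted blend between an individually trained model and a group-average model. The function name, the representation of an advisory as a heading/speed-change vector, the stand-in models, and the blending weight alpha are all hypothetical, not the project's actual method.

```python
import numpy as np

def blended_advisory(state, individual_model, group_model, alpha):
    """Return a conflict-resolution advisory vector.

    alpha = 1.0 -> fully individually matched (conformal to this controller)
    alpha = 0.0 -> fully group-average
    """
    ind = individual_model(state)   # advisory learned from this controller's own solutions
    grp = group_model(state)        # advisory learned from pooled controller data
    return alpha * ind + (1.0 - alpha) * grp

# Toy stand-ins for the learned models (the real system would use trained
# deep/reinforcement learning models; these lambdas are placeholders).
individual_model = lambda s: np.array([10.0, 0.0])    # e.g., +10 deg heading change
group_model = lambda s: np.array([4.0, -20.0])        # e.g., +4 deg heading, -20 kt speed

state = np.array([0.3, -1.2, 0.8])                    # abstract traffic-state features
print(blended_advisory(state, individual_model, group_model, alpha=0.7))
```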
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/892970
Start date: 01-06-2020
End date: 30-11-2022
Total budget - Public funding: 997 212,00 Euro - 997 212,00 Euro
Cordis data
Status: CLOSED
Call topic: SESAR-ER4-01-2019
Update Date: 26-10-2022