Summary
Remote digital towers (RDT) are being deployed around the world to improve efficiency and safety. TRUSTY harnesses artificial intelligence (AI) to enhance resilience, capacity, and efficiency, enabling significant advances in the deployment of digital towers. The overall goal of TRUSTY is to adapt the level of transparency and explanation in order to enhance the trustworthiness of AI-powered decisions in the context of RDT. Using the video streams transmitted from RDT, TRUSTY addresses the following major tasks:
1. Taxiway monitoring (e.g., bird hazards, drone presence, autonomous vehicle monitoring, human intrusion).
2. Runway monitoring (approach and landing): misalignment warnings and the corresponding explanations (a minimal sketch of both tasks follows this list).
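The Python sketch below is purely illustrative of how these two monitoring tasks could be framed over RDT video frames; the detector interface, class names, and thresholds are assumptions for illustration and are not TRUSTY deliverables.

from dataclasses import dataclass

TAXIWAY_HAZARDS = {"bird", "drone", "vehicle", "person"}

@dataclass
class Detection:
    label: str               # object class predicted from the video frame
    confidence: float        # detector confidence in [0, 1]
    lateral_offset_m: float  # signed offset from the runway centreline (metres)

def detect_objects(frame):
    """Placeholder for a video object detector; a real system would run an ML model here."""
    return []  # list of Detection instances

def monitor_frame(frame, offset_limit_m=5.0, min_conf=0.6):
    """Return human-readable alerts for taxiway intrusions and runway misalignment."""
    alerts = []
    for det in detect_objects(frame):
        if det.confidence < min_conf:
            continue
        # Task 1: taxiway monitoring (bird hazard, drone, vehicle, human intrusion)
        if det.label in TAXIWAY_HAZARDS:
            alerts.append(f"Taxiway intrusion: {det.label} (confidence {det.confidence:.2f})")
        # Task 2: runway approach/landing misalignment warning with an explanation
        elif det.label == "aircraft" and abs(det.lateral_offset_m) > offset_limit_m:
            alerts.append(f"Misalignment warning: aircraft {det.lateral_offset_m:+.1f} m "
                          f"from the centreline (limit ±{offset_limit_m} m)")
    return alerts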
To deliver trustworthiness in an AI-powered intelligent system, several approaches are considered:
• ‘Self-explainable and self-learning’ system for critical decision-making
• ‘Transparent ML’ models incorporating interpretability, fairness, and accountability
• ‘Interactive data visualization and HMI dashboard’ for smart and efficient decision support
• ‘Adaptive level of explanation’ matched to the user's cognitive state (see the sketch after this list)
• ‘Human-centric AI’ to enhance the trustworthiness of AI-powered systems
• ‘Human-AI teaming’ to incorporate user feedback, ensuring computational flexibility and user acceptance
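As a purely illustrative sketch of the ‘adaptive level of explanation’ idea, the snippet below selects an explanation verbosity from an assumed cognitive-load estimate in [0, 1]; the thresholds, function name, and wording are hypothetical, not the project's actual design.

def explanation_for(alert: str, details: str, cognitive_load: float) -> str:
    """Pick an explanation level from an estimated cognitive load in [0, 1]."""
    if cognitive_load > 0.7:
        return alert  # high workload: alert only, defer the full rationale
    if cognitive_load > 0.3:
        return f"{alert} - {details.splitlines()[0]}"  # moderate workload: one-line reason
    return f"{alert}\n{details}"  # low workload: full explanation for transparency

print(explanation_for(
    "Misalignment warning",
    "Aircraft is 7.2 m left of the centreline.\nCrosswind gust detected on final approach.",
    cognitive_load=0.5,
))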
To achieve this goal, TRUSTY will rely on state-of-the-art (SotA) developments in interactive data visualization and user-centric explanation, and on recent technological improvements in accuracy, robustness, interpretability, fairness, and accountability. We will apply information visualization techniques such as visual analytics, data-driven storytelling, and immersive analytics to human-machine interaction (HMI). Thus, this project sits at the crossroads of trustworthy AI, multi-model machine learning, active learning, and UX for human and AI model interaction.
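As a minimal illustration of the human-AI teaming and active-learning loop mentioned above, the sketch below routes low-confidence alerts to the controller and stores the feedback for later model updates; the threshold, callback, and buffer are assumptions for illustration only.

feedback_buffer = []  # (frame_id, label, controller_verdict) tuples kept for retraining

def review_alert(frame_id, label, confidence, ask_controller, threshold=0.8):
    """Auto-accept confident alerts; query the human operator for uncertain ones."""
    if confidence >= threshold:
        return True  # trusted automatically
    verdict = ask_controller(frame_id, label)           # human-in-the-loop decision
    feedback_buffer.append((frame_id, label, verdict))  # stored for active-learning updates
    return verdict

# Example: a controller callback that confirms every queried alert
accepted = review_alert(42, "drone", 0.55, ask_controller=lambda fid, lbl: True)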
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101114838 |
Start date: | 01-09-2023 |
End date: | 28-02-2026 |
Total budget - Public funding: | 999 967,50 Euro - 999 967,00 Euro |
Cordis data
Status: | SIGNED |
Call topic: | HORIZON-SESAR-2022-DES-ER-01-WA1-7 |
Update Date: | 31-07-2023 |