INVERSE | INteractive robots that intuitiVely lEarn to inVErt tasks by ReaSoning about their Execution

Summary
Despite impressive advances in Artificial Intelligence (AI), current robotic solutions fall short of expectations when asked to operate in partially unknown environments. Above all, robots lack the cognitive capabilities to understand a task well enough to perform it in a different domain. As humans, we gain deep insight into how a process is executed while learning it, which lets us replicate that execution in a different domain with little effort. We can also invert the execution of a task and react to contingencies by focusing our attention on the most critical prediction phases. Replicating these cognitive processes in AI-driven robots is challenging, however, because it requires a profound rethinking of the robot learning paradigm itself. The robot needs to understand how to act and, as humans do, to imagine the possible consequences of its actions in another domain. This calls for a novel framework that spans different levels of abstraction, starting from physical interaction with the environment, passing through active perception and understanding, and ending with decision-making. The INVERSE project aims to provide robots with these essential cognitive abilities through a continual learning approach. After an initial bootstrap phase, in which initial knowledge is created from human-level specifications, the robot refines its repertoire by capitalising on its own experience and on human feedback. This experience-driven strategy makes it possible to frame problems such as performing a task in a different domain as problems of fault detection and recovery. Humans play a central role in INVERSE: their supervision helps limit the complexity of the refinement loop, making the solution suitable for deployment in production scenarios. The effectiveness of the developed solutions will be demonstrated in two complementary use cases designed as realistic instantiations of actual work environments.
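As a rough illustration of the continual-learning loop described above (bootstrap from human-level specifications, execution, fault detection and recovery, refinement from the robot's own experience and human feedback), the following minimal Python sketch shows how such a loop could be organised. All names, data structures and functions below are illustrative assumptions, not INVERSE project code.

# Hypothetical sketch of the continual-learning loop described in the summary.
# Domain changes are treated as faults to detect and recover from; episodes and
# human corrections are stored for later refinement of the skill.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Skill:
    """A task skill bootstrapped from a human-level specification (assumed structure)."""
    name: str
    policy: Callable[[dict], str]                # maps an observation to an action label
    experience: List[dict] = field(default_factory=list)


def detect_fault(observation: dict, expected: str) -> bool:
    """Flag a fault when the observed outcome diverges from the predicted one."""
    return observation.get("outcome") != expected


def refine(skill: Skill, episode: dict, human_feedback: Optional[str]) -> None:
    """Store the episode and any human correction for later policy updates."""
    episode["feedback"] = human_feedback
    skill.experience.append(episode)


def run_episode(skill: Skill, env_step: Callable[[str], dict],
                ask_human: Callable[[dict], Optional[str]]) -> bool:
    """One execute-detect-recover cycle of the refinement loop."""
    observation = {"outcome": None}
    action = skill.policy(observation)
    observation = env_step(action)               # physical interaction with the environment
    if detect_fault(observation, expected="success"):
        feedback = ask_human(observation)        # human supervision requested on demand
        refine(skill, {"action": action, "obs": observation}, feedback)
        return False
    refine(skill, {"action": action, "obs": observation}, None)
    return True

In this reading, transferring a skill to a new domain goes through the same detect-and-recover path as any other execution fault, with human feedback recorded alongside the robot's own experience for later refinement.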
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101136067
Start date: 01-01-2024
End date: 31-12-2027
Total budget: EUR 7 999 873.50
Public funding: EUR 7 999 873.00
Cordis data

Status

SIGNED

Call topic

HORIZON-CL4-2023-DIGITAL-EMERGING-01-01

Update Date

12-03-2024
Structured mapping
Artificial Intelligence, Data and Robotics Partnership (ADR)
  ADR Partnership Call 2023
    HORIZON-CL4-2023-DIGITAL-EMERGING-01-01 Novel paradigms and approaches, towards AI-driven autonomous robots (AI, data and robotics partnership) (RIA)
Horizon Europe
  HORIZON.2 Global Challenges and European Industrial Competitiveness
    HORIZON.2.4 Digital, Industry and Space
      HORIZON.2.4.0 Cross-cutting call topics
        HORIZON-CL4-2023-DIGITAL-EMERGING-01
          HORIZON-CL4-2023-DIGITAL-EMERGING-01-01 Novel paradigms and approaches, towards AI-driven autonomous robots (AI, data and robotics partnership) (RIA)
      HORIZON.2.4.5 Artificial Intelligence and Robotics
        HORIZON-CL4-2023-DIGITAL-EMERGING-01
          HORIZON-CL4-2023-DIGITAL-EMERGING-01-01 Novel paradigms and approaches, towards AI-driven autonomous robots (AI, data and robotics partnership) (RIA)