Summary
PILLAR-Robots aims to develop a new generation of robots with a higher level of autonomy: robots able to determine their own goals and establish their own strategies, creatively building on the experience acquired during their lifetime to fulfil the desires of their human designers/users in real-life application use cases brought to TRL5. To this end, the project will operationalize the concept of Purpose, drawn from the cognitive sciences, to increase the autonomy and domain independence of robots during autonomous learning and, at the same time, to lead them to acquire knowledge and skills that are actually relevant to operating in the target real-world applications. In particular, the project will develop algorithms for the acquisition of purpose by the robot, ways to bias the perceptual, motivational and decision systems of the robots’ cognitive architectures towards purposes, and strategies for learning the representations, skills and models that enable purpose-related deliberative and reactive decision processes. Given the aim of reaching TRL5, PILLAR-Robots will implement and validate demonstrators of purposeful lifelong open-ended autonomy, using the resulting Purposeful Intrinsically Motivated Cognitive Architecture, in three application fields characterized by different types and levels of variability: Agri-food, Edutainment, and unstructured Industrial/retail. PILLAR-Robots will evaluate the possibilities and impacts of purposeful lifelong open-ended autonomy in these realms from an operational perspective, but also from market-oriented (with significant productivity gains) and societal (socio-economic, ethical and regulatory) perspectives. Engagement of industry and SME players is also expected, to prepare the ground for further large-scale demonstration.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101070381
Start date: 01-10-2022
End date: 30-09-2026
Total budget - Public funding: 4 990 046,25 Euro - 4 990 046,00 Euro
Cordis data
Status: SIGNED
Call topic: HORIZON-CL4-2021-DIGITAL-EMERGING-01-11
Update date: 09-02-2023