Summary
Future robots are expected to perform a multitude of complex tasks with high variability, in close collaboration or even physical contact with humans, and in industrial as well as non-industrial settings. Both human-robot interaction and task variability are major challenges. Substantial progress is needed so that: (1) robots recognize the intention of the human and react with human-like motions; (2) robot end-users, such as operators on the factory floor or people at home, are able to deploy robots for new tasks or new situations in an intuitive way, for example by simply demonstrating the task to the robot.
The fundamental challenge addressed in this proposal is: how can a robot generalize a skill that has been demonstrated in a particular situation and apply it to new situations? This project focuses on skills involving rigid objects manipulated by a robot or a human and follows a model-based approach consisting of: (1) conversion of the demonstrated data to an innovative invariant representation of motion and interaction forces; (2) generalization of this representation to a new situation by solving an optimal control problem in which similarity with the invariant representation is maintained while complying with the constraints imposed by the new context. Additional knowledge about the task can be included in the constraints.
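To make step (2) concrete, a minimal sketch of such an optimal control problem follows; the symbols below are introduced here for exposition only and do not necessarily match the project's exact formulation:
\[
\min_{x(\cdot),\,u(\cdot)} \int_0^T \left\| i\big(x(t)\big) - i_{\mathrm{demo}}(t) \right\|^2 \,\mathrm{d}t
\qquad \text{subject to} \qquad
\dot{x}(t) = f\big(x(t), u(t)\big), \quad g\big(x(t), u(t)\big) \le 0,
\]
where \(x\) denotes the state of the robot and manipulated object, \(u\) the control input, \(i(x)\) the invariant representation of the resulting motion and interaction forces, \(i_{\mathrm{demo}}\) the invariant representation extracted from the demonstration, \(f\) the system dynamics, and \(g\) the constraints imposed by the new context, including any additional task knowledge. The objective keeps the executed skill similar to the demonstration in invariant terms, while the constraints adapt it to the new situation.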
The major breakthroughs are that the required number of demonstrations, and hence the training effort, decreases drastically; that similarity with the demonstration is maintained, preserving the human-like nature of the motion; and that task knowledge is easily included.
The methodology is applied to program robot skills involving motion in free space (e.g. human-robot hand-over tasks) as well as advanced manipulation skills involving contact (e.g. assembly, cleaning), aiming at impact in both industrial and non-industrial settings.
Applying the invariant motion representation in the neighbouring field of biomechanics will further amplify this impact.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/788298
Start date: 01-10-2018
End date: 31-03-2024
Total budget / Public funding: 2 494 971,00 EUR / 2 494 971,00 EUR
Status: CLOSED
Call topic: ERC-2017-ADG
Update date: 27-04-2024