Summary
All day long, our fingers touch, grasp, and move objects in various media such as air, water, and oil. We do this almost effortlessly: it feels as if we spend no time planning and reflecting on what our hands and fingers do, or on how the continuous integration of sensory modalities such as vision, touch, proprioception, and hearing helps us outperform any other biological system in the variety of interaction tasks we can execute. Largely overlooked, and perhaps most fascinating, is the ease with which we perform these interactions, which has fostered the belief that they are also easy to accomplish in artificial systems such as robots. However, there are still no robots that can easily hand-wash dishes, button a shirt, or peel a potato. Our claim is that this is fundamentally a problem of appropriate representation or parameterization. When interacting with objects, a robot needs to consider their geometric, topological, and physical properties. This can be done either explicitly, by modeling and representing these properties, or implicitly, by learning them from data. The main scientific objective of this project is to create new informative and compact representations of deformable objects that combine analytical and learning-based approaches and encode geometric, topological, and physical information about the robot, the object, and the environment. We will do this in the context of challenging multimodal, bimanual object interaction tasks, with a focus on physical interaction with deformable objects using multimodal feedback. To meet these objectives, we will use theoretical and computational methods together with rigorous experimental evaluation to model skilled sensorimotor behavior in bimanual robot systems.
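To make the idea of combining explicit and learned object descriptions more concrete, the sketch below shows a minimal, hypothetical Python structure: sampled surface points and physical parameters stand in for the explicit (analytical) part, and a latent vector stands in for a learned encoding. All names, fields, and the toy relaxation rule are illustrative assumptions under this reading of the abstract, not the project's actual representation.

```python
from dataclasses import dataclass, field
import numpy as np


@dataclass
class DeformableObjectState:
    """Hypothetical compact state for a deformable object.

    Combines explicit (analytical) attributes with an implicit
    (learned) latent code; every field here is illustrative only.
    """
    # Explicit geometric representation: sampled surface points (N x 3).
    points: np.ndarray
    # Explicit physical parameters, e.g. per-object stiffness and damping.
    stiffness: float = 1.0
    damping: float = 0.1
    # Implicit representation: latent code from some learned encoder (assumed).
    latent: np.ndarray = field(default_factory=lambda: np.zeros(16))

    def apply_displacement(self, node_idx: int, delta: np.ndarray) -> None:
        """Toy quasi-static update: displace one point and move the others
        with a weight that decays exponentially with distance, scaled by
        the stiffness parameter (a deliberately naive placeholder rule)."""
        d = np.linalg.norm(self.points - self.points[node_idx], axis=1)
        weights = np.exp(-self.stiffness * d)[:, None]
        self.points = self.points + weights * delta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloth = DeformableObjectState(points=rng.uniform(size=(100, 3)))
    cloth.apply_displacement(node_idx=0, delta=np.array([0.0, 0.0, 0.05]))
    print(cloth.points.shape, cloth.latent.shape)
```

In a real system, such a state would be produced and updated by perception and learning components rather than a hand-written rule; the sketch only illustrates how geometric, physical, and learned information could coexist in one compact object description.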
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/884807
Start date: 01-09-2020
End date: 31-08-2025
Total budget: 2 424 186,25 EUR | Public funding: 2 424 186,00 EUR
Cordis data
Status: SIGNED
Call topic: ERC-2019-ADG
Update Date: 27-04-2024
Geographical location(s)