VisualGrasping | Visually guided grasping and its effects on visual representations

Summary
I ask how vision guides grasping and, conversely, how learning to grasp objects constrains visual processing. Grasping an object feels effortless, yet the computations underlying grasp planning are nontrivial, and an extensive literature describes the multifaceted features of visually guided grasping. I aim to bind this fragmented body of knowledge into a unified framework for understanding how humans visually select grasps. To do so, I will use motion-tracking hardware (already in place at the University of Giessen) to measure and model human grasping of 3D objects. I will rely on Dr. Fleming’s unique expertise in physical simulation to simulate human grasping of objects varying in shape and material. Joining behavioral measurements with computer simulations will provide a powerful data- and theory-driven approach to fully map out the space of human grasping behavior. The complementary goal of this proposal is to understand how grasping constrains visual processing of object shape and material. I plan to tackle this goal by building a computational model of visual processing for grasp planning. Both Dr. Fleming and I have previous experience with computational modelling of visual function. I will exploit powerful machine learning techniques to infer what kinds of visual representations are necessary for grasp planning. I will train deep neural networks (for which the hardware and software are already in place and in use by the Fleming lab) using extensive physics simulations. Dissecting the learned network architecture and comparing the network’s performance to human behavior will tell us what information about shape, material, and objects the human visual system encodes to plan motor actions. In short, with this research I aim to determine how processing within the human visual system is both shaped by and guides hand motor action.
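
As an illustration of the modelling approach described above, the sketch below shows one minimal way a grasp-prediction network could be trained on physics-simulated data: a small convolutional network maps a rendered object image to a pair of grasp contact points, whose predictions could then be compared against measured human grasp endpoints. This is only a hedged sketch; the names (GraspNet, the layer sizes, the random stand-in data) are hypothetical and it is not the specific architecture proposed in the project.

# Minimal sketch (hypothetical): a small CNN mapping a rendered object image
# to a pair of 2D grasp contact points, trained on physics-simulated grasps.
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder: extracts shape/material features from the image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: predicts two contact points (e.g. thumb and index
        # finger), i.e. four image coordinates in total.
        self.head = nn.Linear(64, 4)

    def forward(self, image):
        features = self.encoder(image).flatten(1)
        return self.head(features)

# Toy training loop on simulated data (random tensors stand in for object
# renderings and simulated grasp contact points here).
model = GraspNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    images = torch.rand(8, 3, 128, 128)   # rendered simulated objects
    target_points = torch.rand(8, 4)      # simulated grasp contact points
    predicted_points = model(images)
    loss = loss_fn(predicted_points, target_points)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice, the random tensors would be replaced by renderings and contact points produced by the physics simulations, and the trained network's predictions could be compared to human grasp points with a simple error metric such as mean Euclidean distance.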
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/793660
Start date: 02-04-2018
End date: 02-07-2020
Total budget: 159 460,80 Euro
Public funding: 159 460,00 Euro
Cordis data

Status: CLOSED
Call topic: MSCA-IF-2017
Update date: 28-04-2024
Structured mapping
Horizon 2020
H2020-EU.1. EXCELLENT SCIENCE
H2020-EU.1.3. EXCELLENT SCIENCE - Marie Skłodowska-Curie Actions (MSCA)
H2020-EU.1.3.2. Nurturing excellence by means of cross-border and cross-sector mobility
H2020-MSCA-IF-2017
MSCA-IF-2017