Learning a visuomotor controller for real world robotic grasping using simulated depth images

Ulrich Viereck, Andreas ten Pas, Kate Saenko, Robert Platt
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:291-300, 2017.

Abstract

We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.
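The page carries no code, but the approach the abstract describes, a CNN that predicts a distance-to-nearest-true-grasp for candidate gripper motions given a wrist-camera depth image, used inside a greedy closed-loop controller, can be sketched roughly. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the network architecture, the (dx, dy, dtheta) action parameterization, and all names and shapes are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class GraspDistanceNet(nn.Module):
    """Toy stand-in for the paper's CNN: maps a depth image plus a
    candidate gripper motion (dx, dy, dtheta) to a scalar predicted
    distance to the nearest true grasp. Architecture is assumed."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> (N, 32*4*4)
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 3, 64), nn.ReLU(),
            nn.Linear(64, 1),                        # predicted distance
        )

    def forward(self, depth_image, action):
        feats = self.conv(depth_image)
        return self.head(torch.cat([feats, action], dim=1))

def control_step(net, depth_image, candidate_actions):
    """One greedy closed-loop step: score every candidate gripper
    motion under the current depth image and return the motion with
    the smallest predicted distance-to-grasp."""
    with torch.no_grad():
        batch = depth_image.expand(len(candidate_actions), -1, -1, -1)
        dists = net(batch, candidate_actions).squeeze(1)
    return candidate_actions[torch.argmin(dists)]

# Hypothetical usage with a random image and randomly sampled motions.
net = GraspDistanceNet()
image = torch.rand(1, 1, 64, 64)       # stand-in wrist-camera depth image
actions = torch.randn(32, 3) * 0.01    # sampled (dx, dy, dtheta) offsets
best = control_step(net, image, actions)
print("next gripper motion:", best.tolist())
```

In the setting the abstract describes, such a network would be trained on simulated depth images labeled with distances to ground-truth grasps; at run time the controller repeatedly re-senses and executes the motion that minimizes the predicted distance, which is what lets it react to object disturbances and kinematic noise.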

Cite this Paper

BibTeX
@InProceedings{pmlr-v78-viereck17a,
  title     = {Learning a visuomotor controller for real world robotic grasping using simulated depth images},
  author    = {Viereck, Ulrich and ten Pas, Andreas and Saenko, Kate and Platt, Robert},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages     = {291--300},
  year      = {2017},
  editor    = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume    = {78},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v78/viereck17a/viereck17a.pdf},
  url       = {https://proceedings.mlr.press/v78/viereck17a.html},
  abstract  = {We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.}
}
Endnote
%0 Conference Paper
%T Learning a visuomotor controller for real world robotic grasping using simulated depth images
%A Ulrich Viereck
%A Andreas ten Pas
%A Kate Saenko
%A Robert Platt
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-viereck17a
%I PMLR
%P 291--300
%U https://proceedings.mlr.press/v78/viereck17a.html
%V 78
%X We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.
APA
Viereck, U., ten Pas, A., Saenko, K. & Platt, R. (2017). Learning a visuomotor controller for real world robotic grasping using simulated depth images. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:291-300. Available from https://proceedings.mlr.press/v78/viereck17a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v78/viereck17a/viereck17a.pdf