Grasp2Vec: Learning Object Representations from Self-Supervised Grasping

Eric Jang, Coline Devin, Vincent Vanhoucke, Sergey Levine
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:99-112, 2018.

Abstract

Well-structured visual representations can make robot learning faster and can improve generalization. In this paper, we study how we can acquire effective object-centric representations for robotic manipulation tasks without human labeling by using autonomous robot interaction with the environment. Such representation learning methods can benefit from continuous refinement of the representation as the robot collects more experience, allowing them to scale effectively without human intervention. Our representation learning approach is based on object persistence: when a robot removes an object from a scene, the representation of that scene should change according to the features of the object that was removed. We formulate an arithmetic relationship between feature vectors from this observation, and use it to learn a representation of scenes and objects that can then be used to identify object instances, localize them in the scene, and perform goal-directed grasping tasks where the robot must retrieve commanded objects from a bin. The same grasping procedure can also be used to automatically collect training data for our method, by recording images of scenes, grasping and removing an object, and recording the outcome. Our experiments demonstrate that this self-supervised approach for tasked grasping substantially outperforms direct reinforcement learning from images and prior representation learning methods.
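The object-persistence idea above can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the encoder outputs `phi_pre`, `phi_post`, and `phi_outcome` are assumed to come from learned scene and outcome encoders (names chosen here for illustration), and the plain cosine distance stands in for the metric-learning style objective the paper actually trains with. The key relationship is that removing an object should shift the scene embedding by exactly that object's features.

```python
import numpy as np

def grasp2vec_consistency_loss(phi_pre, phi_post, phi_outcome, eps=1e-8):
    """Sketch of the object-persistence objective (illustrative only).

    phi_pre:     embedding of the scene before grasping, shape (d,)
    phi_post:    embedding of the scene after the object is removed, shape (d,)
    phi_outcome: embedding of an image of the grasped object, shape (d,)
    """
    # Object persistence: the change in the scene embedding caused by
    # removing an object should match that object's own embedding.
    delta = phi_pre - phi_post

    # Push the scene difference toward the outcome embedding.
    # (A cosine distance is used here as a simplified stand-in for a
    # contrastive / metric-learning loss over a batch of grasps.)
    cos_sim = np.dot(delta, phi_outcome) / (
        np.linalg.norm(delta) * np.linalg.norm(phi_outcome) + eps
    )
    return 1.0 - cos_sim


# Toy usage with random stand-in embeddings.
rng = np.random.default_rng(0)
d = 512
phi_pre, phi_post, phi_outcome = (rng.normal(size=d) for _ in range(3))
print(grasp2vec_consistency_loss(phi_pre, phi_post, phi_outcome))
```

At retrieval time, the same difference vector can be compared against candidate object embeddings to identify or localize the grasped instance, which is how the learned representation supports the goal-directed grasping described in the abstract. Each training triplet (pre-grasp image, post-grasp image, outcome image) is produced automatically by the grasping procedure itself, so no human labels are required.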

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-jang18a,
  title     = {Grasp2Vec: Learning Object Representations from Self-Supervised Grasping},
  author    = {Jang, Eric and Devin, Coline and Vanhoucke, Vincent and Levine, Sergey},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {99--112},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/jang18a/jang18a.pdf},
  url       = {https://proceedings.mlr.press/v87/jang18a.html}
}
Endnote
%0 Conference Paper
%T Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
%A Eric Jang
%A Coline Devin
%A Vincent Vanhoucke
%A Sergey Levine
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-jang18a
%I PMLR
%P 99--112
%U https://proceedings.mlr.press/v87/jang18a.html
%V 87
APA
Jang, E., Devin, C., Vanhoucke, V. & Levine, S. (2018). Grasp2Vec: Learning Object Representations from Self-Supervised Grasping. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:99-112. Available from https://proceedings.mlr.press/v87/jang18a.html.