Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation

Qiuyu Chen, Karl Van Wyk, Yu-Wei Chao, Wei Yang, Arsalan Mousavian, Abhishek Gupta, Dieter Fox
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1222-1232, 2023.

Abstract

Dexterous robotic hands have the capability to interact with a wide variety of household objects. However, learning robust real-world grasping policies for arbitrary objects has proven challenging due to the difficulty of generating high-quality training data. In this work, we propose a learning system (ISAGrasp) that leverages a small number of human demonstrations to bootstrap the generation of a much larger dataset containing successful grasps on a variety of novel objects. Our key insight is to use a correspondence-aware implicit generative model to deform object meshes and demonstrated human grasps, creating a diverse dataset for supervised learning while maintaining semantic realism. We use this dataset to train a robust grasping policy in simulation, which can then be deployed in the real world. We demonstrate grasping performance with a four-fingered Allegro hand in both simulation and the real world, and show that this method can handle entirely new semantic classes, achieving a 79% success rate on grasping unseen objects in the real world.
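
As a rough illustration of the pipeline the abstract describes, below is a minimal Python sketch of the shape-and-grasp augmentation loop: deform an object with a latent-conditioned implicit generative model, carry the demonstrated grasp along via dense correspondences, and keep only variants that still succeed in simulation. All names here (implicit_model, simulator, map_grasp, and their interfaces) are hypothetical placeholders for illustration, not the authors' released code.

    import numpy as np

    def map_grasp(grasp, correspondence_fn):
        """Carry a demonstrated grasp over to a deformed object by mapping
        each hand contact point through the dense shape correspondences."""
        return {
            "contacts": correspondence_fn(grasp["contacts"]),  # (N, 3) points
            "wrist_pose": grasp["wrist_pose"],
        }

    def augment_dataset(demos, implicit_model, simulator, variants_per_demo=100):
        """Expand a few human grasp demonstrations into a large supervised
        dataset of (deformed object, transferred grasp) pairs."""
        dataset = []
        for mesh, grasp in demos:
            for _ in range(variants_per_demo):
                # Sample a latent shape code so the generative model produces
                # a semantically plausible variant of the original object.
                z = np.random.randn(implicit_model.latent_dim)
                new_mesh, correspondence_fn = implicit_model.deform(mesh, z)
                # Transfer the demonstrated grasp via the correspondences.
                new_grasp = map_grasp(grasp, correspondence_fn)
                # Keep only grasps that still succeed under physics, so the
                # dataset contains verified positives for supervised learning.
                if simulator.grasp_succeeds(new_mesh, new_grasp):
                    dataset.append((new_mesh, new_grasp))
        return dataset

Per the abstract, a dataset generated this way supervises a grasping policy trained in simulation and then deployed on a four-fingered Allegro hand; the sketch covers only the data-generation step.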

Cite this Paper

BibTeX
@InProceedings{pmlr-v205-chen23b,
  title     = {Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation},
  author    = {Chen, Qiuyu and Van Wyk, Karl and Chao, Yu-Wei and Yang, Wei and Mousavian, Arsalan and Gupta, Abhishek and Fox, Dieter},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1222--1232},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/chen23b/chen23b.pdf},
  url       = {https://proceedings.mlr.press/v205/chen23b.html},
  abstract  = {Dexterous robotic hands have the capability to interact with a wide variety of household objects. However, learning robust real world grasping policies for arbitrary objects has proven challenging due to the difficulty of generating high quality training data. In this work, we propose a learning system (\emph{ISAGrasp}) for leveraging a small number of human demonstrations to bootstrap the generation of a much larger dataset containing successful grasps on a variety of novel objects. Our key insight is to use a correspondence-aware implicit generative model to deform object meshes and demonstrated human grasps in order to create a diverse dataset for supervised learning, while maintaining semantic realism. We use this dataset to train a robust grasping policy in simulation which can be deployed in the real world. We demonstrate grasping performance with a four-fingered Allegro hand in both simulation and the real world, and show this method can handle entirely new semantic classes and achieve a 79% success rate on grasping unseen objects in the real world.}
}
EndNote
%0 Conference Paper
%T Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation
%A Qiuyu Chen
%A Karl Van Wyk
%A Yu-Wei Chao
%A Wei Yang
%A Arsalan Mousavian
%A Abhishek Gupta
%A Dieter Fox
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-chen23b
%I PMLR
%P 1222--1232
%U https://proceedings.mlr.press/v205/chen23b.html
%V 205
%X Dexterous robotic hands have the capability to interact with a wide variety of household objects. However, learning robust real world grasping policies for arbitrary objects has proven challenging due to the difficulty of generating high quality training data. In this work, we propose a learning system (ISAGrasp) for leveraging a small number of human demonstrations to bootstrap the generation of a much larger dataset containing successful grasps on a variety of novel objects. Our key insight is to use a correspondence-aware implicit generative model to deform object meshes and demonstrated human grasps in order to create a diverse dataset for supervised learning, while maintaining semantic realism. We use this dataset to train a robust grasping policy in simulation which can be deployed in the real world. We demonstrate grasping performance with a four-fingered Allegro hand in both simulation and the real world, and show this method can handle entirely new semantic classes and achieve a 79% success rate on grasping unseen objects in the real world.
APA
Chen, Q., Van Wyk, K., Chao, Y.-W., Yang, W., Mousavian, A., Gupta, A., & Fox, D. (2023). Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1222-1232. Available from https://proceedings.mlr.press/v205/chen23b.html.
