Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

Yueh-Hua Wu, Jiashun Wang, Xiaolong Wang
Proceedings of The 6th Conference on Robot Learning, PMLR 205:618-629, 2023.

Abstract

Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics. While recent progress in imitation learning has greatly improved sample efficiency compared to Reinforcement Learning, the learned policy can hardly generalize to manipulate novel objects, given limited expert demonstrations. In this paper, we propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category, which are generated from a human grasp affordance model. This generalizes the policy to novel object instances within the same category. To train the policy, we propose a novel imitation learning objective jointly with a geometric representation learning objective using our demonstrations. By experimenting with relocating diverse objects in simulation, we show that our approach outperforms baselines by a large margin when manipulating novel objects. We also ablate the importance of 3D object representation learning for manipulation. We include videos and code on the project website: https://kristery.github.io/ILAD/.
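The abstract describes training with a joint imitation objective and a geometric representation learning objective over object point clouds. The sketch below illustrates that general pattern only: a behavior-cloning loss on demonstrated actions plus an auxiliary geometry loss from a shared point-cloud encoder. The network architecture, keypoint-prediction auxiliary task, and loss weighting here are assumptions for illustration, not the paper's actual formulation; see the project code at https://kristery.github.io/ILAD/ for the real objective.

import torch
import torch.nn as nn


class JointPolicy(nn.Module):
    def __init__(self, point_feat_dim=128, state_dim=24, action_dim=30):
        super().__init__()
        # Shared point-cloud encoder: per-point MLP followed by max pooling
        # (a PointNet-style backbone; the paper's exact encoder may differ).
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, point_feat_dim)
        )
        # Policy head maps [object feature, robot proprioceptive state] to an action.
        self.policy_head = nn.Sequential(
            nn.Linear(point_feat_dim + state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )
        # Hypothetical auxiliary geometric head: predicts a coarse set of 3D
        # keypoints from the pooled feature, serving as a representation-learning
        # signal that keeps the encoder aware of object shape.
        self.geom_head = nn.Linear(point_feat_dim, 32 * 3)

    def forward(self, points, state):
        feat = self.point_encoder(points).max(dim=1).values  # (B, point_feat_dim)
        action = self.policy_head(torch.cat([feat, state], dim=-1))
        keypoints = self.geom_head(feat).view(-1, 32, 3)
        return action, keypoints


def joint_loss(model, points, state, expert_action, target_keypoints, lam=0.1):
    """Imitation (behavior cloning) loss plus weighted geometric loss."""
    action, keypoints = model(points, state)
    bc_loss = nn.functional.mse_loss(action, expert_action)
    geom_loss = nn.functional.mse_loss(keypoints, target_keypoints)
    return bc_loss + lam * geom_loss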

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-wu23a,
  title     = {Learning Generalizable Dexterous Manipulation from Human Grasp Affordance},
  author    = {Wu, Yueh-Hua and Wang, Jiashun and Wang, Xiaolong},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {618--629},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/wu23a/wu23a.pdf},
  url       = {https://proceedings.mlr.press/v205/wu23a.html},
  abstract  = {Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics. While recent progress in imitation learning has largely improved the sample efficiency compared to Reinforcement Learning, the learned policy can hardly generalize to manipulate novel objects, given limited expert demonstrations. In this paper, we propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category, which are generated from a human grasp affordance model. This generalizes the policy to novel object instances within the same category. To train the policy, we propose a novel imitation learning objective jointly with a geometric representation learning objective using our demonstrations. By experimenting with relocating diverse objects in simulation, we show that our approach outperforms baselines with a large margin when manipulating novel objects. We also ablate the importance of 3D object representation learning for manipulation. We include videos and code on the project website: https://kristery.github.io/ILAD/ .}
}
Endnote
%0 Conference Paper
%T Learning Generalizable Dexterous Manipulation from Human Grasp Affordance
%A Yueh-Hua Wu
%A Jiashun Wang
%A Xiaolong Wang
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-wu23a
%I PMLR
%P 618--629
%U https://proceedings.mlr.press/v205/wu23a.html
%V 205
%X Dexterous manipulation with a multi-finger hand is one of the most challenging problems in robotics. While recent progress in imitation learning has largely improved the sample efficiency compared to Reinforcement Learning, the learned policy can hardly generalize to manipulate novel objects, given limited expert demonstrations. In this paper, we propose to learn dexterous manipulation using large-scale demonstrations with diverse 3D objects in a category, which are generated from a human grasp affordance model. This generalizes the policy to novel object instances within the same category. To train the policy, we propose a novel imitation learning objective jointly with a geometric representation learning objective using our demonstrations. By experimenting with relocating diverse objects in simulation, we show that our approach outperforms baselines with a large margin when manipulating novel objects. We also ablate the importance of 3D object representation learning for manipulation. We include videos and code on the project website: https://kristery.github.io/ILAD/ .
APA
Wu, Y., Wang, J. & Wang, X. (2023). Learning Generalizable Dexterous Manipulation from Human Grasp Affordance. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:618-629. Available from https://proceedings.mlr.press/v205/wu23a.html.
