Geometry Matching for Multi-Embodiment Grasping

Maria Attarian, Muhammad Adil Asif, Jingzhou Liu, Ruthrash Hari, Animesh Garg, Igor Gilitschenski, Jonathan Tompson
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1242-1256, 2023.

Abstract

While significant progress has been made on the problem of generating grasps, many existing learning-based approaches still concentrate on a single embodiment, generalize poorly to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. In this paper, we tackle multi-embodiment grasping from the viewpoint of learning rich geometric representations for both objects and end-effectors using Graph Neural Networks (GNNs). Our novel method, GeoMatch, applies supervised learning on grasping data from multiple embodiments, learning end-to-end contact point likelihood maps as well as conditional autoregressive prediction of grasps, keypoint by keypoint. We compare our method against three baselines that provide multi-embodiment support. Our approach performs better across three end-effectors while also providing competitive diversity of grasps. Examples can be found at geomatch.github.io.
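To make the abstract's two ingredients concrete, here is a minimal sketch in plain PyTorch of (a) GNN embeddings for object and end-effector geometry and (b) an autoregressive loop that scores a contact likelihood map for each end-effector keypoint, conditioned on the keypoints matched so far. This is an illustrative stand-in, not the authors' released implementation: the tiny message-passing layer, all module names, dimensions, and the greedy decoding are assumptions for the sketch.

# Illustrative sketch only, in the spirit of GeoMatch. All names, sizes,
# and the simple GNN below are hypothetical, not the paper's architecture.
import torch
import torch.nn as nn

class TinyGNNLayer(nn.Module):
    """One round of mean-aggregation message passing over a kNN graph."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)

    def forward(self, x, nbr_idx):
        # x: (N, D) node features; nbr_idx: (N, K) neighbor indices
        nbrs = x[nbr_idx]                          # (N, K, D)
        ctr = x.unsqueeze(1).expand_as(nbrs)       # (N, K, D)
        m = self.msg(torch.cat([ctr, nbrs], -1))   # (N, K, D)
        return torch.relu(x + m.mean(dim=1))       # residual update

class AutoregressiveMatcher(nn.Module):
    """Predicts one object contact point per end-effector keypoint,
    conditioning each step on the embeddings of points chosen so far."""
    def __init__(self, dim=64, num_keypoints=5):
        super().__init__()
        self.obj_gnn = TinyGNNLayer(dim)
        self.eff_gnn = TinyGNNLayer(dim)
        # One scoring head per keypoint; input is object embedding +
        # keypoint embedding + pooled context of earlier matches.
        self.heads = nn.ModuleList(
            nn.Linear(3 * dim, 1) for _ in range(num_keypoints))

    def forward(self, obj_x, obj_nbr, eff_x, eff_nbr):
        obj_e = self.obj_gnn(obj_x, obj_nbr)   # (N, D) object embeddings
        eff_e = self.eff_gnn(eff_x, eff_nbr)   # (M, D) keypoint embeddings
        n = obj_e.size(0)
        ctx = torch.zeros(1, obj_e.size(1))    # running context of matches
        picks = []
        for k, head in enumerate(self.heads):
            feat = torch.cat([obj_e,
                              eff_e[k].expand(n, -1),
                              ctx.expand(n, -1)], dim=-1)
            logits = head(feat).squeeze(-1)    # contact likelihood map
            idx = logits.argmax()              # greedy pick (could sample)
            picks.append(idx)
            ctx = ctx + obj_e[idx].unsqueeze(0) / len(self.heads)
        return torch.stack(picks)

# Toy usage with random geometry: 128 object points, 5 keypoints, kNN = 8.
obj_x, eff_x = torch.randn(128, 64), torch.randn(5, 64)
obj_nbr = torch.randint(0, 128, (128, 8))
eff_nbr = torch.randint(0, 5, (5, 8))
print(AutoregressiveMatcher()(obj_x, obj_nbr, eff_x, eff_nbr))

The autoregressive conditioning (here, a pooled context vector of already-selected contact points) is what lets each keypoint's prediction depend on the earlier ones, which is how a single model can produce coherent grasps across end-effectors with different keypoint counts.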

Cite this Paper

BibTeX
@InProceedings{pmlr-v229-attarian23a,
  title     = {Geometry Matching for Multi-Embodiment Grasping},
  author    = {Attarian, Maria and Asif, Muhammad Adil and Liu, Jingzhou and Hari, Ruthrash and Garg, Animesh and Gilitschenski, Igor and Tompson, Jonathan},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1242--1256},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/attarian23a/attarian23a.pdf},
  url       = {https://proceedings.mlr.press/v229/attarian23a.html}
}
Endnote
%0 Conference Paper
%T Geometry Matching for Multi-Embodiment Grasping
%A Maria Attarian
%A Muhammad Adil Asif
%A Jingzhou Liu
%A Ruthrash Hari
%A Animesh Garg
%A Igor Gilitschenski
%A Jonathan Tompson
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-attarian23a
%I PMLR
%P 1242--1256
%U https://proceedings.mlr.press/v229/attarian23a.html
%V 229
APA
Attarian, M., Asif, M.A., Liu, J., Hari, R., Garg, A., Gilitschenski, I. & Tompson, J. (2023). Geometry Matching for Multi-Embodiment Grasping. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1242-1256. Available from https://proceedings.mlr.press/v229/attarian23a.html.