V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects

Xingyu Liu, Kris M. Kitani
Proceedings of the 5th Conference on Robot Learning, PMLR 164:287-296, 2022.

Abstract

Manipulating articulated objects generally requires multiple robot arms, and enabling those arms to collaboratively complete manipulation tasks on articulated objects is challenging. In this paper, we present V-MAO, a framework for learning multi-arm manipulation of articulated objects. Our framework includes a variational generative model that learns a contact point distribution over the object's rigid parts for each robot arm. The training signal is obtained from interaction with the simulation environment, which is enabled by planning and a novel formulation of object-centric control for articulated objects. We deploy our framework in a customized MuJoCo simulation environment and demonstrate that it achieves a high success rate on six different objects and two different robots. We also show that generative modeling can effectively learn the contact point distribution on articulated objects.
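To make the core idea concrete, the sketch below shows one plausible form of such a variational contact model: a conditional VAE that encodes a rigid part's point cloud together with an observed per-point contact mask into a latent code, then decodes per-point contact logits. All specifics here (the PointNet-style feature extractor, latent size, class name ContactCVAE) are illustrative assumptions; the paper's actual architecture is not described in this abstract.

# Minimal sketch of a variational model over per-point contact, in the
# spirit of V-MAO's contact point distribution learning. Architecture
# details are assumptions, not the paper's actual network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContactCVAE(nn.Module):
    def __init__(self, latent_dim=16, hidden=128):
        super().__init__()
        # Per-point feature extractor over (x, y, z) coordinates of a rigid
        # part's point cloud (a stand-in for a PointNet-style backbone).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Recognition network q(z | points, contacts): global shape feature
        # plus a summary of the observed contact mask -> mean and log-variance.
        self.enc = nn.Linear(hidden + 1, 2 * latent_dim)
        # Decoder p(contact | points, z): per-point contact logits.
        self.dec = nn.Sequential(
            nn.Linear(hidden + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, points, contacts):
        # points: (B, N, 3); contacts: (B, N) binary contact labels.
        feat = self.point_mlp(points)                    # (B, N, H)
        global_feat = feat.max(dim=1).values             # (B, H) shape code
        contact_rate = contacts.float().mean(1, keepdim=True)
        stats = self.enc(torch.cat([global_feat, contact_rate], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_tiled = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.dec(torch.cat([feat, z_tiled], dim=-1)).squeeze(-1)
        # ELBO: reconstruct the contact mask + KL to the unit Gaussian prior.
        recon = F.binary_cross_entropy_with_logits(logits, contacts.float())
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl

Training such a model on successful contact configurations collected in simulation and then sampling z from the unit Gaussian prior at test time would yield diverse candidate contact points for each arm, consistent with the generative-modeling role the abstract describes.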

Cite this Paper

BibTeX
@InProceedings{pmlr-v164-liu22a,
  title     = {V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects},
  author    = {Liu, Xingyu and Kitani, Kris M.},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {287--296},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/liu22a/liu22a.pdf},
  url       = {https://proceedings.mlr.press/v164/liu22a.html},
  abstract  = {Manipulating articulated objects generally requires multiple robot arms, and enabling those arms to collaboratively complete manipulation tasks on articulated objects is challenging. In this paper, we present V-MAO, a framework for learning multi-arm manipulation of articulated objects. Our framework includes a variational generative model that learns a contact point distribution over the object's rigid parts for each robot arm. The training signal is obtained from interaction with the simulation environment, which is enabled by planning and a novel formulation of object-centric control for articulated objects. We deploy our framework in a customized MuJoCo simulation environment and demonstrate that it achieves a high success rate on six different objects and two different robots. We also show that generative modeling can effectively learn the contact point distribution on articulated objects.}
}
Endnote
%0 Conference Paper
%T V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects
%A Xingyu Liu
%A Kris M. Kitani
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-liu22a
%I PMLR
%P 287--296
%U https://proceedings.mlr.press/v164/liu22a.html
%V 164
%X Manipulating articulated objects generally requires multiple robot arms, and enabling those arms to collaboratively complete manipulation tasks on articulated objects is challenging. In this paper, we present V-MAO, a framework for learning multi-arm manipulation of articulated objects. Our framework includes a variational generative model that learns a contact point distribution over the object's rigid parts for each robot arm. The training signal is obtained from interaction with the simulation environment, which is enabled by planning and a novel formulation of object-centric control for articulated objects. We deploy our framework in a customized MuJoCo simulation environment and demonstrate that it achieves a high success rate on six different objects and two different robots. We also show that generative modeling can effectively learn the contact point distribution on articulated objects.
APA
Liu, X. & Kitani, K. M. (2022). V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:287-296. Available from https://proceedings.mlr.press/v164/liu22a.html.
