A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects

Anthony Simeonov, Yilun Du, Beomjoon Kim, Francois Hogan, Joshua Tenenbaum, Pulkit Agrawal, Alberto Rodriguez
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1582-1601, 2021.

Abstract

We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation. Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics. We show that for rigid bodies, this abstraction can be realized using low-level manipulation skills that maintain sticking contact with the object and represent subgoals as 3D transformations. To enable generalization to unseen objects and improve planning performance, we propose a novel way of representing subgoals for rigid-body manipulation and a graph-attention-based neural network architecture for processing point-cloud inputs. We experimentally validate these choices using simulated and real-world experiments on the YuMi robot. Results demonstrate that our method can successfully manipulate new objects into target configurations requiring long-term planning. Overall, our framework realizes the best of both worlds: task-and-motion planning (TAMP) and learning-based approaches. Project website: https://anthonysimeonov.github.io/rpo-planning-framework/.
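The abstract's key abstraction is that a subgoal for a rigid object can be represented as a 3D rigid-body transformation applied to the observed point cloud, and a plan is a sequence (composition) of such transforms. A minimal numpy sketch of that idea, with hypothetical helper names (`make_subgoal`, `apply_subgoal`) not drawn from the paper's code:

```python
import numpy as np

def make_subgoal(rotation, translation):
    """Pack a 3x3 rotation matrix and a 3-vector translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply_subgoal(T, points):
    """Apply a rigid-body transform T (4x4) to an (N, 3) point cloud, returning the moved cloud."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

# One subgoal: rotate the object 90 degrees about z, then translate it by 0.1 m along x.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
subgoal = make_subgoal(Rz, np.array([0.1, 0.0, 0.0]))

cloud = np.array([[1.0, 0.0, 0.0]])   # a one-point "object" for illustration
moved = apply_subgoal(subgoal, cloud)  # -> [[0.1, 1.0, 0.0]]

# Subgoals compose by matrix multiplication, so a multi-step plan is a product of transforms.
two_steps = apply_subgoal(subgoal @ subgoal, cloud)
```

Because SE(3) transforms are closed under composition, a planner can search over sequences of these subgoals without modeling the contact dynamics that realize each one; that separation is what the abstract means by freeing the planner from robot-object interaction dynamics.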

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-simeonov21a,
  title     = {A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects},
  author    = {Simeonov, Anthony and Du, Yilun and Kim, Beomjoon and Hogan, Francois and Tenenbaum, Joshua and Agrawal, Pulkit and Rodriguez, Alberto},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1582--1601},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/simeonov21a/simeonov21a.pdf},
  url       = {https://proceedings.mlr.press/v155/simeonov21a.html},
  abstract  = {We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation. Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics. We show that for rigid-bodies, this abstraction can be realized using low-level manipulation skills that maintain sticking-contact with the object and represent subgoals as 3D transformations. To enable generalization to unseen objects and improve planning performance, we propose a novel way of representing subgoals for rigid-body manipulation and a graph-attention based neural network architecture for processing point-cloud inputs. We experimentally validate these choices using simulated and real-world experiments on the YuMi robot. Results demonstrate that our method can successfully manipulate new objects into target configurations requiring long-term planning. Overall, our framework realizes the best of the worlds of task-and-motion planning (TAMP) and learning-based approaches. Project website: https://anthonysimeonov.github.io/rpo-planning-framework/.}
}
Endnote
%0 Conference Paper
%T A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects
%A Anthony Simeonov
%A Yilun Du
%A Beomjoon Kim
%A Francois Hogan
%A Joshua Tenenbaum
%A Pulkit Agrawal
%A Alberto Rodriguez
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-simeonov21a
%I PMLR
%P 1582--1601
%U https://proceedings.mlr.press/v155/simeonov21a.html
%V 155
%X We present a framework for solving long-horizon planning problems involving manipulation of rigid objects that operates directly from a point-cloud observation. Our method plans in the space of object subgoals and frees the planner from reasoning about robot-object interaction dynamics. We show that for rigid-bodies, this abstraction can be realized using low-level manipulation skills that maintain sticking-contact with the object and represent subgoals as 3D transformations. To enable generalization to unseen objects and improve planning performance, we propose a novel way of representing subgoals for rigid-body manipulation and a graph-attention based neural network architecture for processing point-cloud inputs. We experimentally validate these choices using simulated and real-world experiments on the YuMi robot. Results demonstrate that our method can successfully manipulate new objects into target configurations requiring long-term planning. Overall, our framework realizes the best of the worlds of task-and-motion planning (TAMP) and learning-based approaches. Project website: https://anthonysimeonov.github.io/rpo-planning-framework/.
APA
Simeonov, A., Du, Y., Kim, B., Hogan, F., Tenenbaum, J., Agrawal, P. & Rodriguez, A. (2021). A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1582-1601. Available from https://proceedings.mlr.press/v155/simeonov21a.html.