FlowBot++: Learning Generalized Articulated Objects Manipulation via Articulation Projection

Harry Zhang, Ben Eisner, David Held
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1222-1241, 2023.

Abstract

Understanding and manipulating articulated objects, such as doors and drawers, is crucial for robots operating in human environments. We wish to develop a system that can learn to articulate novel objects with no prior interaction, after training on other articulated objects. Previous approaches for articulated object manipulation rely on either modular methods, which are brittle, or end-to-end methods, which lack generalizability. This paper presents FlowBot++, a deep 3D vision-based robotic system that predicts dense per-point motion and dense articulation parameters of articulated objects to assist in downstream manipulation tasks. FlowBot++ introduces a novel per-point representation of the articulated motion and articulation parameters that are combined to produce a more accurate estimate than either method on its own. Simulated experiments on the PartNet-Mobility dataset validate the performance of our system in articulating a wide range of objects, while real-world experiments on real objects’ point clouds and a Sawyer robot demonstrate the generalizability and feasibility of our system in real-world scenarios. Videos are available on our anonymized website: https://sites.google.com/view/flowbotpp/home
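To make the abstract's central idea concrete, below is a minimal sketch of how dense per-point predictions could be fused into a single joint estimate. This is an illustration only, not the paper's implementation: the function name `estimate_revolute_joint` and the assumption that the network outputs a per-point "projection" vector to the articulation axis alongside a per-point flow vector are hypothetical readings of the abstract.

```python
import numpy as np

def estimate_revolute_joint(points, flows, projections):
    """Fuse per-point articulation flow with per-point axis
    projections into one revolute-joint estimate (hypothetical
    parameterization, inferred from the abstract).

    points      : (N, 3) points on the moving part
    flows       : (N, 3) predicted instantaneous motion per point
    projections : (N, 3) predicted vector from each point to its
                  closest point on the articulation axis
    Returns (origin, direction) describing the joint axis.
    """
    # Each point plus its projection vector should land on the
    # axis, so averaging those landing points yields an origin
    # estimate that uses all N predictions, not just one.
    origin = (points + projections).mean(axis=0)

    # For a pure rotation, v_i = omega x r_i with radial lever arm
    # r_i = -projections_i, so r_i x v_i points along the joint
    # axis at every point; summing the votes averages out noise.
    axis_votes = np.cross(-projections, flows)
    direction = axis_votes.sum(axis=0)
    direction /= np.linalg.norm(direction)
    return origin, direction
```

The appeal of such a dense fusion is that a noisy prediction at any single point has little influence: the axis origin and direction are each supported by all N points, which is one way to read the abstract's claim that the combined representation is more accurate than either cue alone.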

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-zhang23c,
  title     = {FlowBot++: Learning Generalized Articulated Objects Manipulation via Articulation Projection},
  author    = {Zhang, Harry and Eisner, Ben and Held, David},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1222--1241},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/zhang23c/zhang23c.pdf},
  url       = {https://proceedings.mlr.press/v229/zhang23c.html}
}
Endnote
%0 Conference Paper
%T FlowBot++: Learning Generalized Articulated Objects Manipulation via Articulation Projection
%A Harry Zhang
%A Ben Eisner
%A David Held
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-zhang23c
%I PMLR
%P 1222--1241
%U https://proceedings.mlr.press/v229/zhang23c.html
%V 229
APA
Zhang, H., Eisner, B., & Held, D. (2023). FlowBot++: Learning Generalized Articulated Objects Manipulation via Articulation Projection. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1222-1241. Available from https://proceedings.mlr.press/v229/zhang23c.html.
