Learning Generalizable Manipulation Policies with Object-Centric 3D Representations

Yifeng Zhu, Zhenyu Jiang, Peter Stone, Yuke Zhu
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3418-3433, 2023.

Abstract

We introduce GROOT, an imitation learning method for learning robust policies with object-centric and 3D priors. GROOT builds policies that generalize beyond their initial training conditions for vision-based manipulation. It constructs object-centric 3D representations that are robust to background changes and camera views, and reasons over these representations with a transformer-based policy. Furthermore, we introduce a segmentation correspondence model that allows policies to generalize to new objects at test time. Through comprehensive experiments, we validate the robustness of GROOT policies against perceptual variations in simulated and real-world environments. GROOT excels at generalizing over background changes, camera viewpoint shifts, and new object instances, whereas both state-of-the-art end-to-end learning methods and object-proposal-based approaches fall short. We also extensively evaluate GROOT policies on real robots, demonstrating their efficacy under drastic changes in setup. More videos and model details can be found in the appendix and on the project website: https://ut-austin-rpl.github.io/GROOT.
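To make the high-level idea in the abstract concrete (encode each segmented object's 3D point cloud into a token, then reason over object tokens with a transformer to predict an action), the following is a minimal, hypothetical PyTorch sketch. The PointNet-style pooling encoder, module sizes, and all names here are illustrative assumptions, not GROOT's actual architecture, which is detailed in the paper.

import torch
import torch.nn as nn

class ObjectCentricPolicy(nn.Module):
    def __init__(self, point_dim=3, token_dim=128, action_dim=7, num_layers=2):
        super().__init__()
        # Per-point MLP followed by max pooling: one token per object.
        self.point_encoder = nn.Sequential(
            nn.Linear(point_dim, 64), nn.ReLU(), nn.Linear(64, token_dim)
        )
        layer = nn.TransformerEncoderLayer(
            d_model=token_dim, nhead=4, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.action_head = nn.Linear(token_dim, action_dim)

    def forward(self, object_points):
        # object_points: (batch, num_objects, num_points, 3) segmented clouds.
        feats = self.point_encoder(object_points)      # (B, O, P, D)
        tokens = feats.max(dim=2).values               # (B, O, D) per-object tokens
        context = self.transformer(tokens)             # reason across objects
        return self.action_head(context.mean(dim=1))   # (B, action_dim)

# Usage: one scene with two segmented objects, 256 points each.
policy = ObjectCentricPolicy()
action = policy(torch.randn(1, 2, 256, 3))
print(action.shape)  # torch.Size([1, 7])

Because the policy consumes segmented object point clouds rather than raw images, backgrounds and camera placement only affect the segmentation front end, which is one intuition for the robustness the abstract reports.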

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-zhu23b,
  title     = {Learning Generalizable Manipulation Policies with Object-Centric 3D Representations},
  author    = {Zhu, Yifeng and Jiang, Zhenyu and Stone, Peter and Zhu, Yuke},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3418--3433},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/zhu23b/zhu23b.pdf},
  url       = {https://proceedings.mlr.press/v229/zhu23b.html}
}
Endnote
%0 Conference Paper
%T Learning Generalizable Manipulation Policies with Object-Centric 3D Representations
%A Yifeng Zhu
%A Zhenyu Jiang
%A Peter Stone
%A Yuke Zhu
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-zhu23b
%I PMLR
%P 3418--3433
%U https://proceedings.mlr.press/v229/zhu23b.html
%V 229
APA
Zhu, Y., Jiang, Z., Stone, P. & Zhu, Y. (2023). Learning Generalizable Manipulation Policies with Object-Centric 3D Representations. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3418-3433. Available from https://proceedings.mlr.press/v229/zhu23b.html.
