Task-Oriented Hierarchical Object Decomposition for Visuomotor Control

Jianing Qian, Yunshuang Li, Bernadette Bucher, Dinesh Jayaraman
Proceedings of The 8th Conference on Robot Learning, PMLR 270:1891-1909, 2025.

Abstract

Good pre-trained visual representations could enable robots to learn visuomotor policies efficiently. Still, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) Being completely task-agnostic, these representations cannot effectively ignore any task-irrelevant information in the scene, and (2) They often lack the representational capacity to handle unconstrained/complex real-world scenes. Instead, we propose to train a large combinatorial family of representations organized by scene entities: objects and object parts. This hierarchical object decomposition for task-oriented representations (HODOR) permits selectively assembling different representations specific to each task while scaling in representational capacity with the complexity of the scene and the task. In our experiments, we find that HODOR outperforms prior pre-trained representations, both scene vector representations and object-centric representations, for sample-efficient imitation learning across 5 simulated and 5 real-world manipulation tasks. We further find that the invariances captured in HODOR are inherited into downstream policies, which can robustly generalize to out-of-distribution test conditions, permitting zero-shot skill chaining. Appendix and videos: https://sites.google.com/view/hodor-corl24

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-qian25b,
  title     = {Task-Oriented Hierarchical Object Decomposition for Visuomotor Control},
  author    = {Qian, Jianing and Li, Yunshuang and Bucher, Bernadette and Jayaraman, Dinesh},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {1891--1909},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/qian25b/qian25b.pdf},
  url       = {https://proceedings.mlr.press/v270/qian25b.html},
  abstract  = {Good pre-trained visual representations could enable robots to learn visuomotor policies efficiently. Still, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) Being completely task-agnostic, these representations cannot effectively ignore any task-irrelevant information in the scene, and (2) They often lack the representational capacity to handle unconstrained/complex real-world scenes. Instead, we propose to train a large combinatorial family of representations organized by scene entities: objects and object parts. This hierarchical object decomposition for task-oriented representations (HODOR) permits selectively assembling different representations specific to each task while scaling in representational capacity with the complexity of the scene and the task. In our experiments, we find that HODOR outperforms prior pre-trained representations, both scene vector representations and object-centric representations, for sample-efficient imitation learning across 5 simulated and 5 real-world manipulation tasks. We further find that the invariances captured in HODOR are inherited into downstream policies, which can robustly generalize to out-of-distribution test conditions, permitting zero-shot skill chaining. Appendix and videos: https://sites.google.com/view/hodor-corl24}
}
Endnote
%0 Conference Paper %T Task-Oriented Hierarchical Object Decomposition for Visuomotor Control %A Jianing Qian %A Yunshuang Li %A Bernadette Bucher %A Dinesh Jayaraman %B Proceedings of The 8th Conference on Robot Learning %C Proceedings of Machine Learning Research %D 2025 %E Pulkit Agrawal %E Oliver Kroemer %E Wolfram Burgard %F pmlr-v270-qian25b %I PMLR %P 1891--1909 %U https://proceedings.mlr.press/v270/qian25b.html %V 270 %X Good pre-trained visual representations could enable robots to learn visuomotor policies efficiently. Still, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) Being completely task-agnostic, these representations cannot effectively ignore any task-irrelevant information in the scene, and (2) They often lack the representational capacity to handle unconstrained/complex real-world scenes. Instead, we propose to train a large combinatorial family of representations organized by scene entities: objects and object parts. This hierarchical object decomposition for task-oriented representations (HODOR) permits selectively assembling different representations specific to each task while scaling in representational capacity with the complexity of the scene and the task. In our experiments, we find that HODOR outperforms prior pre-trained representations, both scene vector representations and object-centric representations, for sample-efficient imitation learning across 5 simulated and 5 real-world manipulation tasks. We further find that the invariances captured in HODOR are inherited into downstream policies, which can robustly generalize to out-of-distribution test conditions, permitting zero-shot skill chaining. Appendix and videos: https://sites.google.com/view/hodor-corl24
APA
Qian, J., Li, Y., Bucher, B., & Jayaraman, D. (2025). Task-Oriented Hierarchical Object Decomposition for Visuomotor Control. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:1891-1909. Available from https://proceedings.mlr.press/v270/qian25b.html.