Hierarchical Equivariant Policy via Frame Transfer

Haibo Zhao, Dian Wang, Yizhe Zhu, Xupeng Zhu, Owen Lewis Howell, Linfeng Zhao, Yaoyao Qian, Robin Walters, Robert Platt
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:77703-77722, 2025.

Abstract

Recent advances in hierarchical policy learning highlight the advantages of decomposing systems into high-level and low-level agents, enabling efficient long-horizon reasoning and precise fine-grained control. However, the interface between these hierarchy levels remains underexplored, and existing hierarchical methods often ignore domain symmetry, resulting in the need for extensive demonstrations to achieve robust performance. To address these issues, we propose Hierarchical Equivariant Policy (HEP), a novel hierarchical policy framework. We propose a frame transfer interface for hierarchical policy learning, which uses the high-level agent’s output as a coordinate frame for the low-level agent, providing a strong inductive bias while retaining flexibility. Additionally, we integrate domain symmetries into both levels and theoretically demonstrate the system’s overall equivariance. HEP achieves state-of-the-art performance in complex robotic manipulation tasks, demonstrating significant improvements in both simulation and real-world settings.
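The frame-transfer idea described in the abstract can be illustrated with a toy sketch: a high-level agent outputs a coordinate frame (an origin and orientation), the low-level agent acts in that frame, and the composed policy inherits rotational equivariance. This is a minimal 2D (SO(2)) illustration only; the function names and the nearest-point heuristic are hypothetical stand-ins, not the paper's actual learned agents.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix for angle theta (an element of SO(2))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def high_level(obs):
    """Hypothetical high-level agent: pick the keypoint nearest the origin
    and orient a frame toward it. Returns (frame origin, frame angle)."""
    target = obs[np.argmin(np.linalg.norm(obs, axis=1))]
    theta = np.arctan2(target[1], target[0])
    return target, theta

def low_level(local_goal):
    """Hypothetical low-level agent: a fixed offset expressed in the
    high-level frame, e.g. 'move 0.1 units along the frame's x-axis'."""
    return local_goal + np.array([0.1, 0.0])

def hierarchical_policy(obs):
    """Frame transfer: the low-level agent acts in the high-level frame;
    its local output is mapped back into the world frame."""
    origin, theta = high_level(obs)
    local = low_level(np.zeros(2))
    return origin + rot(theta) @ local

# Equivariance check: rotating the observation by g rotates the action by g.
obs = np.array([[0.5, 0.2], [1.0, -0.3]])
g = rot(np.pi / 3)
a_rotated_input = hierarchical_policy(obs @ g.T)  # act on rotated scene
a_rotated_output = g @ hierarchical_policy(obs)   # rotate original action
assert np.allclose(a_rotated_input, a_rotated_output)
```

Because the high-level frame rotates with the scene and the low-level output is expressed relative to that frame, the composed action rotates accordingly; this is the sense in which the interface preserves equivariance end to end.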

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zhao25u,
  title =     {Hierarchical Equivariant Policy via Frame Transfer},
  author =    {Zhao, Haibo and Wang, Dian and Zhu, Yizhe and Zhu, Xupeng and Howell, Owen Lewis and Zhao, Linfeng and Qian, Yaoyao and Walters, Robin and Platt, Robert},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages =     {77703--77722},
  year =      {2025},
  editor =    {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume =    {267},
  series =    {Proceedings of Machine Learning Research},
  month =     {13--19 Jul},
  publisher = {PMLR},
  pdf =       {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhao25u/zhao25u.pdf},
  url =       {https://proceedings.mlr.press/v267/zhao25u.html},
  abstract =  {Recent advances in hierarchical policy learning highlight the advantages of decomposing systems into high-level and low-level agents, enabling efficient long-horizon reasoning and precise fine-grained control. However, the interface between these hierarchy levels remains underexplored, and existing hierarchical methods often ignore domain symmetry, resulting in the need for extensive demonstrations to achieve robust performance. To address these issues, we propose Hierarchical Equivariant Policy (HEP), a novel hierarchical policy framework. We propose a frame transfer interface for hierarchical policy learning, which uses the high-level agent’s output as a coordinate frame for the low-level agent, providing a strong inductive bias while retaining flexibility. Additionally, we integrate domain symmetries into both levels and theoretically demonstrate the system’s overall equivariance. HEP achieves state-of-the-art performance in complex robotic manipulation tasks, demonstrating significant improvements in both simulation and real-world settings.}
}
Endnote
%0 Conference Paper
%T Hierarchical Equivariant Policy via Frame Transfer
%A Haibo Zhao
%A Dian Wang
%A Yizhe Zhu
%A Xupeng Zhu
%A Owen Lewis Howell
%A Linfeng Zhao
%A Yaoyao Qian
%A Robin Walters
%A Robert Platt
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhao25u
%I PMLR
%P 77703--77722
%U https://proceedings.mlr.press/v267/zhao25u.html
%V 267
%X Recent advances in hierarchical policy learning highlight the advantages of decomposing systems into high-level and low-level agents, enabling efficient long-horizon reasoning and precise fine-grained control. However, the interface between these hierarchy levels remains underexplored, and existing hierarchical methods often ignore domain symmetry, resulting in the need for extensive demonstrations to achieve robust performance. To address these issues, we propose Hierarchical Equivariant Policy (HEP), a novel hierarchical policy framework. We propose a frame transfer interface for hierarchical policy learning, which uses the high-level agent’s output as a coordinate frame for the low-level agent, providing a strong inductive bias while retaining flexibility. Additionally, we integrate domain symmetries into both levels and theoretically demonstrate the system’s overall equivariance. HEP achieves state-of-the-art performance in complex robotic manipulation tasks, demonstrating significant improvements in both simulation and real-world settings.
APA
Zhao, H., Wang, D., Zhu, Y., Zhu, X., Howell, O.L., Zhao, L., Qian, Y., Walters, R. & Platt, R. (2025). Hierarchical Equivariant Policy via Frame Transfer. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:77703-77722. Available from https://proceedings.mlr.press/v267/zhao25u.html.