HuB: Learning Extreme Humanoid Balance

Tong Zhang, Boyuan Zheng, Ruiqian Nai, Yingdong Hu, Yen-Jen Wang, Geng Chen, Fanqi Lin, Jiongye Li, Chuye Hong, Koushil Sreenath, Yang Gao
Proceedings of The 9th Conference on Robot Learning, PMLR 305:686-704, 2025.

Abstract

The human body demonstrates exceptional motor capabilities—such as standing steadily on one foot or performing a high kick with the leg raised over 1.5 meters—both requiring precise balance control. While recent research on humanoid control has leveraged reinforcement learning to track human motions for skill acquisition, applying this paradigm to balance-intensive tasks remains challenging. In this work, we identify three key obstacles: instability from reference motion errors, learning difficulties due to morphological mismatch, and the sim-to-real gap caused by sensor noise and unmodeled dynamics. To address these challenges, we propose $\textbf{HuB}$ ($\textbf{Hu}$manoid $\textbf{B}$alance), a unified framework that integrates $\textit{reference motion refinement}$, $\textit{balance-aware policy learning}$, and $\textit{sim-to-real robustness training}$, with each component targeting a specific challenge. We validate our approach on the Unitree G1 humanoid robot across challenging quasi-static balance tasks, including extreme single-legged poses such as $\texttt{Swallow Balance}$ and $\texttt{Bruce Lee’s Kick}$. Our policy remains stable even under strong physical disturbances—such as a forceful soccer strike—while baseline methods consistently fail to complete these tasks.
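
As a rough illustration of the three-component framework described in the abstract, the Python sketch below shows how such a pipeline could be organized. It is an assumption-laden illustration only: the function names, the exponential-smoothing refinement, the reward weights, and the randomization ranges are placeholders standing in for the paper's reference motion refinement, balance-aware policy learning, and sim-to-real robustness training, and do not reproduce the authors' implementation.

# Hypothetical sketch of a HuB-style three-stage pipeline; all names, terms,
# and numeric ranges are illustrative assumptions, not the paper's method.
import numpy as np

def refine_reference_motion(raw_motion: np.ndarray, smoothing: float = 0.9) -> np.ndarray:
    """Stage 1 (assumed): smooth a noisy reference motion so tracking errors
    do not destabilize quasi-static poses. Exponential smoothing is a stand-in
    for the paper's refinement procedure."""
    refined = raw_motion.copy()
    for t in range(1, len(refined)):
        refined[t] = smoothing * refined[t - 1] + (1.0 - smoothing) * raw_motion[t]
    return refined

def balance_aware_reward(tracking_err: float, com_offset: np.ndarray,
                         w_track: float = 1.0, w_balance: float = 2.0) -> float:
    """Stage 2 (assumed): combine motion tracking with a balance term that
    penalizes horizontal offset between the center of mass and the support region."""
    return -w_track * tracking_err - w_balance * float(np.linalg.norm(com_offset))

def randomize_dynamics(rng: np.random.Generator) -> dict:
    """Stage 3 (assumed): domain randomization over dynamics, sensor noise, and
    external pushes to bridge the sim-to-real gap; ranges are placeholders."""
    return {
        "friction": rng.uniform(0.5, 1.25),
        "mass_scale": rng.uniform(0.9, 1.1),
        "imu_noise_std": rng.uniform(0.0, 0.02),
        "push_force_N": rng.uniform(0.0, 50.0),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    motion = np.cumsum(rng.normal(size=(100, 3)), axis=0)  # fake noisy reference trajectory
    refined = refine_reference_motion(motion)
    print(balance_aware_reward(0.1, np.array([0.03, 0.01])))
    print(randomize_dynamics(rng))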

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-zhang25b,
  title     = {HuB: Learning Extreme Humanoid Balance},
  author    = {Zhang, Tong and Zheng, Boyuan and Nai, Ruiqian and Hu, Yingdong and Wang, Yen-Jen and Chen, Geng and Lin, Fanqi and Li, Jiongye and Hong, Chuye and Sreenath, Koushil and Gao, Yang},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {686--704},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/zhang25b/zhang25b.pdf},
  url       = {https://proceedings.mlr.press/v305/zhang25b.html},
  abstract  = {The human body demonstrates exceptional motor capabilities—such as standing steadily on one foot or performing a high kick with the leg raised over 1.5 meters—both requiring precise balance control. While recent research on humanoid control has leveraged reinforcement learning to track human motions for skill acquisition, applying this paradigm to balance-intensive tasks remains challenging. In this work, we identify three key obstacles: instability from reference motion errors, learning difficulties due to morphological mismatch, and the sim-to-real gap caused by sensor noise and unmodeled dynamics. To address these challenges, we propose $\textbf{HuB}$ ($\textbf{Hu}$manoid $\textbf{B}$alance), a unified framework that integrates $\textit{reference motion refinement}$, $\textit{balance-aware policy learning}$, and $\textit{sim-to-real robustness training}$, with each component targeting a specific challenge. We validate our approach on the Unitree G1 humanoid robot across challenging quasi-static balance tasks, including extreme single-legged poses such as $\texttt{Swallow Balance}$ and $\texttt{Bruce Lee’s Kick}$. Our policy remains stable even under strong physical disturbances—such as a forceful soccer strike—while baseline methods consistently fail to complete these tasks.}
}
Endnote
%0 Conference Paper
%T HuB: Learning Extreme Humanoid Balance
%A Tong Zhang
%A Boyuan Zheng
%A Ruiqian Nai
%A Yingdong Hu
%A Yen-Jen Wang
%A Geng Chen
%A Fanqi Lin
%A Jiongye Li
%A Chuye Hong
%A Koushil Sreenath
%A Yang Gao
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-zhang25b
%I PMLR
%P 686--704
%U https://proceedings.mlr.press/v305/zhang25b.html
%V 305
%X The human body demonstrates exceptional motor capabilities—such as standing steadily on one foot or performing a high kick with the leg raised over 1.5 meters—both requiring precise balance control. While recent research on humanoid control has leveraged reinforcement learning to track human motions for skill acquisition, applying this paradigm to balance-intensive tasks remains challenging. In this work, we identify three key obstacles: instability from reference motion errors, learning difficulties due to morphological mismatch, and the sim-to-real gap caused by sensor noise and unmodeled dynamics. To address these challenges, we propose $\textbf{HuB}$ ($\textbf{Hu}$manoid $\textbf{B}$alance), a unified framework that integrates $\textit{reference motion refinement}$, $\textit{balance-aware policy learning}$, and $\textit{sim-to-real robustness training}$, with each component targeting a specific challenge. We validate our approach on the Unitree G1 humanoid robot across challenging quasi-static balance tasks, including extreme single-legged poses such as $\texttt{Swallow Balance}$ and $\texttt{Bruce Lee’s Kick}$. Our policy remains stable even under strong physical disturbances—such as a forceful soccer strike—while baseline methods consistently fail to complete these tasks.
APA
Zhang, T., Zheng, B., Nai, R., Hu, Y., Wang, Y.-J., Chen, G., Lin, F., Li, J., Hong, C., Sreenath, K., & Gao, Y. (2025). HuB: Learning Extreme Humanoid Balance. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:686-704. Available from https://proceedings.mlr.press/v305/zhang25b.html.