OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning

Tairan He, Zhengyi Luo, Xialin He, Wenli Xiao, Chong Zhang, Weinan Zhang, Kris M. Kitani, Changliu Liu, Guanya Shi
Proceedings of The 8th Conference on Robot Learning, PMLR 270:1516-1540, 2025.

Abstract

We present OmniH2O (Omni Human-to-Humanoid), a learning-based system for whole-body humanoid teleoperation and autonomy. Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands, including real-time teleoperation through a VR headset, verbal instruction, and an RGB camera. OmniH2O also enables full autonomy by learning from teleoperated demonstrations or integrating with frontier models such as GPT-4. OmniH2O demonstrates versatility and dexterity in various real-world whole-body tasks through teleoperation or autonomy, such as playing multiple sports, moving and manipulating objects, and interacting with humans. We develop an RL-based sim-to-real pipeline, which involves large-scale retargeting and augmentation of human motion datasets, learning a real-world deployable policy with sparse sensor input by imitating a privileged teacher policy, and reward designs that enhance robustness and stability. We release the first humanoid whole-body control dataset, OmniH2O-6, containing six everyday tasks, and demonstrate humanoid whole-body skill learning from teleoperated datasets. Videos are available at the anonymous website: [https://anonymous-omni-h2o.github.io/](https://anonymous-omni-h2o.github.io/)
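
The abstract mentions distilling a privileged teacher policy into a policy that is deployable with only sparse sensor input. As a rough illustration of that idea (a minimal sketch, not the authors' released code), the snippet below shows a DAgger-style distillation loop in which a student policy that sees only sparse observations is supervised by a teacher that sees privileged simulator state; the environment interface, observation keys, and network sizes are assumptions made for the example.

```python
# Minimal teacher-student distillation sketch (illustrative; not the OmniH2O codebase).
# Assumptions: `teacher` maps privileged state to actions, and `env` returns a dict
# with "privileged" and "sparse" observation tensors.
import torch
import torch.nn as nn

class StudentPolicy(nn.Module):
    """Deployable policy that only consumes sparse, real-world observations."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, sparse_obs: torch.Tensor) -> torch.Tensor:
        return self.net(sparse_obs)

def distill(teacher, student, env, steps: int = 10_000, lr: float = 3e-4):
    """Roll out the student (on-policy) and regress its actions onto the
    privileged teacher's actions, DAgger-style."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    obs = env.reset()  # assumed to return {"privileged": ..., "sparse": ...}
    for _ in range(steps):
        with torch.no_grad():
            target_act = teacher(obs["privileged"])   # teacher label from full state
        student_act = student(obs["sparse"])           # student sees sparse input only
        loss = nn.functional.mse_loss(student_act, target_act)
        opt.zero_grad()
        loss.backward()
        opt.step()
        obs = env.step(student_act.detach())           # step with the student's action
    return student
```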

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-he25b,
  title     = {OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning},
  author    = {He, Tairan and Luo, Zhengyi and He, Xialin and Xiao, Wenli and Zhang, Chong and Zhang, Weinan and Kitani, Kris M. and Liu, Changliu and Shi, Guanya},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {1516--1540},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/he25b/he25b.pdf},
  url       = {https://proceedings.mlr.press/v270/he25b.html},
  abstract  = {We present OmniH2O (Omni Human-to-Humanoid), a learning-based system for whole-body humanoid teleoperation and autonomy. Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands, including using real-time teleoperation through VR headset, verbal instruction, and RGB camera. OmniH2O also enables full autonomy by learning from teleoperated demonstrations or integrating with frontier models such as GPT-4. OmniH2O demonstrates versatility and dexterity in various real-world whole-body tasks through teleoperation or autonomy, such as playing multiple sports, moving and manipulating objects, and interacting with humans. We develop an RL-based sim-to-real pipeline, which involves large-scale retargeting and augmentation of human motion datasets, learning a real-world deployable policy with sparse sensor input by imitating a privileged teacher policy, and reward designs to enhance robustness and stability. We release the first humanoid whole-body control dataset, OmniH2O-6, containing six everyday tasks, and demonstrate humanoid whole-body skill learning from teleoperated datasets. Videos at the anonymous website [https://anonymous-omni-h2o.github.io/](https://anonymous-omni-h2o.github.io/)}
}
Endnote
%0 Conference Paper
%T OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning
%A Tairan He
%A Zhengyi Luo
%A Xialin He
%A Wenli Xiao
%A Chong Zhang
%A Weinan Zhang
%A Kris M. Kitani
%A Changliu Liu
%A Guanya Shi
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-he25b
%I PMLR
%P 1516--1540
%U https://proceedings.mlr.press/v270/he25b.html
%V 270
%X We present OmniH2O (Omni Human-to-Humanoid), a learning-based system for whole-body humanoid teleoperation and autonomy. Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands, including using real-time teleoperation through VR headset, verbal instruction, and RGB camera. OmniH2O also enables full autonomy by learning from teleoperated demonstrations or integrating with frontier models such as GPT-4. OmniH2O demonstrates versatility and dexterity in various real-world whole-body tasks through teleoperation or autonomy, such as playing multiple sports, moving and manipulating objects, and interacting with humans. We develop an RL-based sim-to-real pipeline, which involves large-scale retargeting and augmentation of human motion datasets, learning a real-world deployable policy with sparse sensor input by imitating a privileged teacher policy, and reward designs to enhance robustness and stability. We release the first humanoid whole-body control dataset, OmniH2O-6, containing six everyday tasks, and demonstrate humanoid whole-body skill learning from teleoperated datasets. Videos at the anonymous website [https://anonymous-omni-h2o.github.io/](https://anonymous-omni-h2o.github.io/)
APA
He, T., Luo, Z., He, X., Xiao, W., Zhang, C., Zhang, W., Kitani, K.M., Liu, C. & Shi, G. (2025). OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:1516-1540. Available from https://proceedings.mlr.press/v270/he25b.html.