Visual Imitation Enables Contextual Humanoid Control

Arthur Allshire, Hongsuk Choi, Junyi Zhang, David McAllister, Anthony Zhang, Chung Min Kim, Trevor Darrell, Pieter Abbeel, Jitendra Malik, Angjoo Kanazawa
Proceedings of The 9th Conference on Robot Learning, PMLR 305:794-815, 2025.

Abstract

How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably the simplest way is to _just show them_—casually capture a human motion video and feed it to humanoids. We introduce **VideoMimic**, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills all from a single policy, conditioned on the environment and global root commands. We hope our data and approach help enable a scalable path towards teaching humanoids to operate in diverse real-world environments.
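The abstract describes a single whole-body policy conditioned on the surrounding environment and a global root command. As a rough illustration only, the sketch below shows one plausible way such a conditioned observation could be assembled; the names, shapes, and the heightmap/command encoding are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch: a policy input that combines proprioception, a local
# terrain heightmap (environment context), and a global root command.
# All field names and dimensions are illustrative assumptions.
from dataclasses import dataclass
import numpy as np


@dataclass
class PolicyObservation:
    proprioception: np.ndarray   # joint positions/velocities, base orientation, etc.
    heightmap: np.ndarray        # local elevation samples around the robot, shape (H, W)
    root_command: np.ndarray     # desired global root motion, e.g. a 3-D velocity/heading


def flatten_observation(obs: PolicyObservation) -> np.ndarray:
    """Concatenate the conditioning signals into a single policy input vector."""
    return np.concatenate([
        obs.proprioception.ravel(),
        obs.heightmap.ravel(),
        obs.root_command.ravel(),
    ])


if __name__ == "__main__":
    obs = PolicyObservation(
        proprioception=np.zeros(45),              # placeholder proprioceptive state
        heightmap=np.zeros((11, 11)),             # placeholder local terrain grid
        root_command=np.array([0.5, 0.0, 0.0]),   # e.g. move forward at 0.5 m/s
    )
    print(flatten_observation(obs).shape)         # (45 + 121 + 3,) = (169,)
```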

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-allshire25a,
  title     = {Visual Imitation Enables Contextual Humanoid Control},
  author    = {Allshire, Arthur and Choi, Hongsuk and Zhang, Junyi and McAllister, David and Zhang, Anthony and Kim, Chung Min and Darrell, Trevor and Abbeel, Pieter and Malik, Jitendra and Kanazawa, Angjoo},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {794--815},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/allshire25a/allshire25a.pdf},
  url       = {https://proceedings.mlr.press/v305/allshire25a.html},
  abstract  = {How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably the simplest way is to _just show them_—casually capture a human motion video and feed it to humanoids. We introduce **VideoMimic**, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills all from a single policy, conditioned on the environment and global root commands. We hope our data and approach help enable a scalable path towards teaching humanoids to operate in diverse real-world environments.}
}
Endnote
%0 Conference Paper
%T Visual Imitation Enables Contextual Humanoid Control
%A Arthur Allshire
%A Hongsuk Choi
%A Junyi Zhang
%A David McAllister
%A Anthony Zhang
%A Chung Min Kim
%A Trevor Darrell
%A Pieter Abbeel
%A Jitendra Malik
%A Angjoo Kanazawa
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-allshire25a
%I PMLR
%P 794--815
%U https://proceedings.mlr.press/v305/allshire25a.html
%V 305
%X How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably the simplest way is to _just show them_—casually capture a human motion video and feed it to humanoids. We introduce **VideoMimic**, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills all from a single policy, conditioned on the environment and global root commands. We hope our data and approach help enable a scalable path towards teaching humanoids to operate in diverse real-world environments.
APA
Allshire, A., Choi, H., Zhang, J., McAllister, D., Zhang, A., Kim, C.M., Darrell, T., Abbeel, P., Malik, J. & Kanazawa, A. (2025). Visual Imitation Enables Contextual Humanoid Control. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:794-815. Available from https://proceedings.mlr.press/v305/allshire25a.html.