Robot Parkour Learning

Ziwen Zhuang, Zipeng Fu, Jianren Wang, Christopher G. Atkeson, Sören Schwertfeger, Chelsea Finn, Hang Zhao
Proceedings of The 7th Conference on Robot Learning, PMLR 229:73-92, 2023.

Abstract

Parkour is a grand challenge for legged locomotion that requires robots to overcome various obstacles rapidly in complex environments. Existing methods can generate either diverse but blind locomotion skills or vision-based but specialized skills by using reference animal data or complex rewards. However, autonomous parkour requires robots to learn generalizable skills that are both vision-based and diverse to perceive and react to various scenarios. In this work, we propose a system for learning a single end-to-end vision-based parkour policy of diverse parkour skills using a simple reward without any reference motion data. We develop a reinforcement learning method inspired by direct collocation to generate parkour skills, including climbing over high obstacles, leaping over large gaps, crawling beneath low barriers, squeezing through thin slits, and running. We distill these skills into a single vision-based parkour policy and transfer it to a quadrupedal robot using its egocentric depth camera. We demonstrate that our system can empower low-cost quadrupedal robots to autonomously select and execute appropriate parkour skills to traverse challenging environments in the real world. Project website: https://robot-parkour.github.io/
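To make the distillation stage concrete, here is a minimal sketch, assuming a DAgger-style regression of a single depth-conditioned student policy onto the actions of privileged, per-skill teacher policies; all module names, network sizes, and data shapes below are illustrative assumptions rather than the paper's implementation.

# A minimal sketch of the distillation stage, assuming DAgger-style
# regression of a depth-conditioned student onto privileged specialist
# teachers. Names, shapes, and the batch source are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class StudentPolicy(nn.Module):
    """Maps egocentric depth + proprioception to joint-position targets."""
    def __init__(self, depth_shape=(1, 64, 64), proprio_dim=33, act_dim=12):
        super().__init__()
        self.encoder = nn.Sequential(                    # small depth encoder
            nn.Conv2d(depth_shape[0], 16, 5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ELU(),
            nn.Flatten(),
        )
        with torch.no_grad():                            # infer flattened size
            feat_dim = self.encoder(torch.zeros(1, *depth_shape)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat_dim + proprio_dim, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, depth, proprio):
        return self.head(torch.cat([self.encoder(depth), proprio], dim=-1))

def distill_step(student, optimizer, batch):
    """One supervised update: regress the student's action onto the
    specialist teacher's action recorded for the same simulator state."""
    pred = student(batch["depth"], batch["proprio"])
    loss = nn.functional.mse_loss(pred, batch["teacher_action"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder batch; in practice it would come from on-policy rollouts in
# simulation, relabeled with teacher actions (hypothetical shapes).
student = StudentPolicy()
opt = torch.optim.Adam(student.parameters(), lr=3e-4)
batch = {
    "depth": torch.randn(8, 1, 64, 64),
    "proprio": torch.randn(8, 33),
    "teacher_action": torch.randn(8, 12),
}
distill_step(student, opt, batch)

Conditioning the student only on depth and proprioception mirrors the deployment setting described in the abstract, where the robot has no privileged state and must act from its egocentric camera.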

Cite this Paper

BibTeX
@InProceedings{pmlr-v229-zhuang23a,
  title     = {Robot Parkour Learning},
  author    = {Zhuang, Ziwen and Fu, Zipeng and Wang, Jianren and Atkeson, Christopher G. and Schwertfeger, S\"{o}ren and Finn, Chelsea and Zhao, Hang},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {73--92},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/zhuang23a/zhuang23a.pdf},
  url       = {https://proceedings.mlr.press/v229/zhuang23a.html}
}
Endnote
%0 Conference Paper
%T Robot Parkour Learning
%A Ziwen Zhuang
%A Zipeng Fu
%A Jianren Wang
%A Christopher G. Atkeson
%A Sören Schwertfeger
%A Chelsea Finn
%A Hang Zhao
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-zhuang23a
%I PMLR
%P 73--92
%U https://proceedings.mlr.press/v229/zhuang23a.html
%V 229
APA
Zhuang, Z., Fu, Z., Wang, J., Atkeson, C. G., Schwertfeger, S., Finn, C., & Zhao, H. (2023). Robot Parkour Learning. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:73-92. Available from https://proceedings.mlr.press/v229/zhuang23a.html.