Learning Semantics-Aware Locomotion Skills from Human Demonstration

Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots
Proceedings of The 6th Conference on Robot Learning, PMLR 205:2205-2214, 2023.

Abstract

The semantics of the environment, such as the terrain type and its properties, reveal important information that legged robots can use to adjust their behaviors. In this work, we present a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots, so that the robot can traverse complex off-road terrains with appropriate speeds and gaits using perception information. Due to the lack of high-fidelity outdoor simulation, our framework needs to be trained directly in the real world, which brings unique challenges in data efficiency and safety. To ensure sample efficiency, we pre-train the perception model with an off-road driving dataset. To avoid the risks of real-world policy exploration, we leverage human demonstrations to train a speed policy that selects a desired forward speed from camera images. For maximum traversability, we pair the speed policy with a gait selector, which selects a robust locomotion gait for each forward speed. Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6 km without failure at close-to-optimal speed.
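
Below is a minimal sketch of the decision pipeline the abstract describes: a pre-trained perception encoder turns a camera image into terrain features, a speed policy learned from human demonstrations maps those features to a desired forward speed, and a gait selector picks a robust gait for that speed. All function names, thresholds, and gait parameters are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Gait:
    name: str
    stepping_frequency_hz: float  # illustrative parameter


def perception_encoder(image: np.ndarray) -> np.ndarray:
    # Stand-in for a perception model pre-trained on an off-road driving
    # dataset; a real implementation would output semantic terrain features.
    return image.reshape(-1, image.shape[-1]).mean(axis=0)


def speed_policy(features: np.ndarray) -> float:
    # Stand-in for the speed policy learned from ~40 minutes of human
    # demonstrations; maps terrain features to a desired forward speed (m/s).
    return float(np.clip(0.5 + 1e-3 * features.sum(), 0.3, 1.5))


def gait_selector(speed_mps: float) -> Gait:
    # Pick a robust gait for the commanded speed; the thresholds and gait
    # names here are assumptions for illustration only.
    if speed_mps < 0.6:
        return Gait("crawl", stepping_frequency_hz=2.0)
    if speed_mps < 1.1:
        return Gait("trot", stepping_frequency_hz=2.5)
    return Gait("flying trot", stepping_frequency_hz=3.0)


def control_step(image: np.ndarray) -> tuple[float, Gait]:
    # One perception-to-command decision: camera image -> (speed, gait).
    features = perception_encoder(image)
    speed = speed_policy(features)
    return speed, gait_selector(speed)


if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(240, 320, 3)).astype(np.float32)
    speed, gait = control_step(frame)
    print(f"desired speed: {speed:.2f} m/s, gait: {gait.name}")
```

The key design point the abstract highlights is that only the image-to-speed mapping is learned from demonstration; the gait choice is a deterministic function of the commanded speed, which keeps real-world training data requirements small.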

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-yang23a,
  title     = {Learning Semantics-Aware Locomotion Skills from Human Demonstration},
  author    = {Yang, Yuxiang and Meng, Xiangyun and Yu, Wenhao and Zhang, Tingnan and Tan, Jie and Boots, Byron},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {2205--2214},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/yang23a/yang23a.pdf},
  url       = {https://proceedings.mlr.press/v205/yang23a.html},
  abstract  = {The semantics of the environment, such as the terrain type and property, reveals important information for legged robots to adjust their behaviors. In this work, we present a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots, such that the robot can traverse through complex offroad terrains with appropriate speeds and gaits using perception information. Due to the lack of high-fidelity outdoor simulation, our framework needs to be trained directly in the real world, which brings unique challenges in data efficiency and safety. To ensure sample efficiency, we pre-train the perception model with an off-road driving dataset. To avoid the risks of real-world policy exploration, we leverage human demonstration to train a speed policy that selects a desired forward speed from camera image. For maximum traversability, we pair the speed policy with a gait selector, which selects a robust locomotion gait for each forward speed. Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure at close-to-optimal speed}
}
Endnote
%0 Conference Paper
%T Learning Semantics-Aware Locomotion Skills from Human Demonstration
%A Yuxiang Yang
%A Xiangyun Meng
%A Wenhao Yu
%A Tingnan Zhang
%A Jie Tan
%A Byron Boots
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-yang23a
%I PMLR
%P 2205--2214
%U https://proceedings.mlr.press/v205/yang23a.html
%V 205
%X The semantics of the environment, such as the terrain type and property, reveals important information for legged robots to adjust their behaviors. In this work, we present a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots, such that the robot can traverse through complex offroad terrains with appropriate speeds and gaits using perception information. Due to the lack of high-fidelity outdoor simulation, our framework needs to be trained directly in the real world, which brings unique challenges in data efficiency and safety. To ensure sample efficiency, we pre-train the perception model with an off-road driving dataset. To avoid the risks of real-world policy exploration, we leverage human demonstration to train a speed policy that selects a desired forward speed from camera image. For maximum traversability, we pair the speed policy with a gait selector, which selects a robust locomotion gait for each forward speed. Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure at close-to-optimal speed
APA
Yang, Y., Meng, X., Yu, W., Zhang, T., Tan, J. & Boots, B. (2023). Learning Semantics-Aware Locomotion Skills from Human Demonstration. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:2205-2214. Available from https://proceedings.mlr.press/v205/yang23a.html.
