Learning to Walk in the Real World with Minimal Human Effort

Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1110-1120, 2021.

Abstract

Reliable and stable locomotion has been one of the most fundamental challenges for legged robots. Deep reinforcement learning (deep RL) has emerged as a promising method for developing such control policies autonomously. In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort. The key difficulties for on-robot learning systems are automatic data collection and safety. We overcome these two challenges by developing a multi-task learning procedure and a safety-constrained RL framework. We tested our system on the task of learning to walk on three different terrains: flat ground, a soft mattress, and a doormat with crevices. Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-ha21c,
  title     = {Learning to Walk in the Real World with Minimal Human Effort},
  author    = {Ha, Sehoon and Xu, Peng and Tan, Zhenyu and Levine, Sergey and Tan, Jie},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1110--1120},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/ha21c/ha21c.pdf},
  url       = {https://proceedings.mlr.press/v155/ha21c.html},
  abstract  = {Reliable and stable locomotion has been one of the most fundamental challenges for legged robots. Deep reinforcement learning (deep RL) has emerged as a promising method for developing such control policies autonomously. In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort. The key difficulties for on-robot learning systems are automatic data collection and safety. We overcome these two challenges by developing a multi-task learning procedure and a safety-constrained RL framework. We tested our system on the task of learning to walk on three different terrains: flat ground, a soft mattress, and a doormat with crevices. Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.}
}
Endnote
%0 Conference Paper
%T Learning to Walk in the Real World with Minimal Human Effort
%A Sehoon Ha
%A Peng Xu
%A Zhenyu Tan
%A Sergey Levine
%A Jie Tan
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-ha21c
%I PMLR
%P 1110--1120
%U https://proceedings.mlr.press/v155/ha21c.html
%V 155
%X Reliable and stable locomotion has been one of the most fundamental challenges for legged robots. Deep reinforcement learning (deep RL) has emerged as a promising method for developing such control policies autonomously. In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort. The key difficulties for on-robot learning systems are automatic data collection and safety. We overcome these two challenges by developing a multi-task learning procedure and a safety-constrained RL framework. We tested our system on the task of learning to walk on three different terrains: flat ground, a soft mattress, and a doormat with crevices. Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
APA
Ha, S., Xu, P., Tan, Z., Levine, S., & Tan, J. (2021). Learning to Walk in the Real World with Minimal Human Effort. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1110-1120. Available from https://proceedings.mlr.press/v155/ha21c.html.