Data Efficient Reinforcement Learning for Legged Robots

Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani
Proceedings of the Conference on Robot Learning, PMLR 100:1-10, 2020.

Abstract

We present a model-based reinforcement learning framework for robot locomotion that achieves walking based on only 4.5 minutes of data collected on a quadruped robot. To accurately model the robot’s dynamics over a long horizon, we introduce a loss function that tracks the model’s prediction over multiple timesteps. We adapt model predictive control to account for planning latency, which allows the learned model to be used for real-time control. Additionally, to ensure safe exploration during model learning, we embed prior knowledge of leg trajectories into the action space. The resulting system achieves fast and robust locomotion. Unlike model-free methods, which optimize for a particular task, our planner can use the same learned dynamics for various tasks, simply by changing the reward function. To the best of our knowledge, our approach is more than an order of magnitude more sample efficient than current model-free methods.
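The multi-step loss mentioned above can be illustrated with a short sketch. The following is not the authors' released code; the model architecture, residual-state parameterization, and tensor shapes are assumptions chosen for illustration. The key idea matches the abstract: unroll the learned dynamics model on its own predictions for a fixed horizon and penalize the error against the ground-truth trajectory at every step, so compounding long-horizon error is optimized directly rather than only one-step accuracy.

```python
# Minimal PyTorch sketch of a multi-step prediction loss (illustrative only).
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Hypothetical one-step dynamics model; predicts the change in state."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        # Residual form s_{t+1} = s_t + f(s_t, a_t); a common modeling choice,
        # assumed here rather than taken from the paper.
        return state + self.net(torch.cat([state, action], dim=-1))

def multi_step_loss(model, states, actions, horizon: int):
    """states: (B, T+1, state_dim); actions: (B, T, action_dim); horizon <= T.

    Rolls the model forward from states[:, 0] using its OWN predictions,
    accumulating the error against the true trajectory at every step.
    """
    pred = states[:, 0]
    loss = 0.0
    for t in range(horizon):
        pred = model(pred, actions[:, t])            # feed prediction back in
        loss = loss + ((pred - states[:, t + 1]) ** 2).mean()
    return loss / horizon

# Example usage with arbitrary dimensions:
#   model = DynamicsModel(state_dim=18, action_dim=8)
#   loss = multi_step_loss(model, states, actions, horizon=10)
#   loss.backward()
```

In contrast, a standard one-step loss would train only on `model(states[:, t], actions[:, t])` versus `states[:, t+1]`, which can look accurate while still drifting badly when the planner rolls the model out over many steps.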

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-yang20a,
  title     = {Data Efficient Reinforcement Learning for Legged Robots},
  author    = {Yang, Yuxiang and Caluwaerts, Ken and Iscen, Atil and Zhang, Tingnan and Tan, Jie and Sindhwani, Vikas},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {1--10},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/yang20a/yang20a.pdf},
  url       = {https://proceedings.mlr.press/v100/yang20a.html},
  abstract  = {We present a model-based reinforcement learning framework for robot locomotion that achieves walking based on only 4.5 minutes of data collected on a quadruped robot. To accurately model the robot’s dynamics over a long horizon, we introduce a loss function that tracks the model’s prediction over multiple timesteps. We adapt model predictive control to account for planning latency, which allows the learned model to be used for real-time control. Additionally, to ensure safe exploration during model learning, we embed prior knowledge of leg trajectories into the action space. The resulting system achieves fast and robust locomotion. Unlike model-free methods, which optimize for a particular task, our planner can use the same learned dynamics for various tasks, simply by changing the reward function. To the best of our knowledge, our approach is more than an order of magnitude more sample efficient than current model-free methods.}
}
Endnote
%0 Conference Paper
%T Data Efficient Reinforcement Learning for Legged Robots
%A Yuxiang Yang
%A Ken Caluwaerts
%A Atil Iscen
%A Tingnan Zhang
%A Jie Tan
%A Vikas Sindhwani
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-yang20a
%I PMLR
%P 1--10
%U https://proceedings.mlr.press/v100/yang20a.html
%V 100
%X We present a model-based reinforcement learning framework for robot locomotion that achieves walking based on only 4.5 minutes of data collected on a quadruped robot. To accurately model the robot’s dynamics over a long horizon, we introduce a loss function that tracks the model’s prediction over multiple timesteps. We adapt model predictive control to account for planning latency, which allows the learned model to be used for real-time control. Additionally, to ensure safe exploration during model learning, we embed prior knowledge of leg trajectories into the action space. The resulting system achieves fast and robust locomotion. Unlike model-free methods, which optimize for a particular task, our planner can use the same learned dynamics for various tasks, simply by changing the reward function. To the best of our knowledge, our approach is more than an order of magnitude more sample efficient than current model-free methods.
APA
Yang, Y., Caluwaerts, K., Iscen, A., Zhang, T., Tan, J. & Sindhwani, V. (2020). Data Efficient Reinforcement Learning for Legged Robots. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:1-10. Available from https://proceedings.mlr.press/v100/yang20a.html.
