Lyapunov Design for Robust and Efficient Robotic Reinforcement Learning

Tyler Westenbroek, Fernando Castaneda, Ayush Agrawal, Shankar Sastry, Koushil Sreenath
Proceedings of The 6th Conference on Robot Learning, PMLR 205:2125-2135, 2023.

Abstract

Recent advances in the reinforcement learning (RL) literature have enabled roboticists to automatically train complex policies in simulated environments. However, due to the poor sample complexity of these methods, solving RL problems using real-world data remains challenging. This paper introduces a novel cost-shaping method which aims to reduce the number of samples needed to learn a stabilizing controller. The method adds a term involving a Control Lyapunov Function (CLF) – an ‘energy-like’ function from the model-based control literature – to typical cost formulations. Theoretical results demonstrate that the new costs lead to stabilizing controllers when smaller discount factors are used, which is well known to reduce sample complexity. Moreover, the addition of the CLF term ‘robustifies’ the search for a stabilizing controller by ensuring that even highly sub-optimal policies will stabilize the system. We demonstrate our approach with two hardware examples where we learn stabilizing controllers for a cartpole and an A1 quadruped with only seconds and a few minutes of fine-tuning data, respectively. Furthermore, simulation benchmark studies show that obtaining stabilizing policies by optimizing our proposed costs requires orders of magnitude less data compared to standard cost designs.
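The central idea of the abstract, augmenting a standard quadratic RL cost with a term built from a Control Lyapunov Function, can be illustrated with a minimal sketch. The quadratic CLF V(x) = xᵀPx, the shaping weight kappa, and the cost matrices below are illustrative assumptions for a cartpole-like state, not the exact formulation, CLF construction, or coefficients used in the paper.

import numpy as np

# Assumed quadratic CLF from a linearized model: V(x) = x' P x with P positive definite.
P = np.diag([5.0, 1.0, 10.0, 1.0])  # illustrative weights for [cart pos, cart vel, pole angle, pole rate]

def clf(x):
    """Evaluate the quadratic CLF at state x."""
    return float(x @ P @ x)

def shaped_cost(x, u, x_next, Q=np.eye(4), R=1e-2 * np.eye(1), kappa=1.0):
    """Standard quadratic cost plus a CLF-difference shaping term.

    The term kappa * (V(x_next) - V(x)) penalizes transitions that increase the
    'energy' V, so even a crude policy that merely decreases V along trajectories
    tends to stabilize the system. kappa is an assumed shaping weight.
    """
    base = float(x @ Q @ x + u @ R @ u)
    shaping = kappa * (clf(x_next) - clf(x))
    return base + shaping

# Example: shaped cost for a single (state, action, next-state) transition.
x = np.array([0.1, 0.0, 0.2, 0.0])
u = np.array([0.5])
x_next = np.array([0.1, 0.05, 0.18, -0.1])
print(shaped_cost(x, u, x_next))

In this sketch the shaping term plays the role the abstract describes: it supplies dense, model-informed feedback about whether each transition moves the system toward the equilibrium, which is what allows smaller discount factors and fewer real-world samples.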

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-westenbroek23a,
  title     = {Lyapunov Design for Robust and Efficient Robotic Reinforcement Learning},
  author    = {Westenbroek, Tyler and Castaneda, Fernando and Agrawal, Ayush and Sastry, Shankar and Sreenath, Koushil},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {2125--2135},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/westenbroek23a/westenbroek23a.pdf},
  url       = {https://proceedings.mlr.press/v205/westenbroek23a.html}
}
Endnote
%0 Conference Paper
%T Lyapunov Design for Robust and Efficient Robotic Reinforcement Learning
%A Tyler Westenbroek
%A Fernando Castaneda
%A Ayush Agrawal
%A Shankar Sastry
%A Koushil Sreenath
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-westenbroek23a
%I PMLR
%P 2125--2135
%U https://proceedings.mlr.press/v205/westenbroek23a.html
%V 205
APA
Westenbroek, T., Castaneda, F., Agrawal, A., Sastry, S. & Sreenath, K. (2023). Lyapunov Design for Robust and Efficient Robotic Reinforcement Learning. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:2125-2135. Available from https://proceedings.mlr.press/v205/westenbroek23a.html.