Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems

Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Animashree Anandkumar
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:5354-5390, 2022.

Abstract

In this work, we study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems. When learning a dynamical system, one needs to stabilize the unknown dynamics in order to avoid system blow-ups. We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment with an improved exploration strategy. We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction. We also show that the regret of the proposed algorithm has only polynomial dependence on the problem dimensions, an exponential improvement over prior methods. Our improved exploration method is simple yet efficient: it combines a sophisticated exploration policy in RL with an isotropic exploration strategy to achieve fast stabilization and improved regret. We empirically demonstrate that the proposed algorithm outperforms other popular methods in several adaptive control tasks.
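
To make the exploration idea concrete, below is a minimal sketch (not the paper's exact algorithm) of the general recipe the abstract describes: a certainty-equivalent LQR controller driven by regularized least-squares estimates of the unknown dynamics, with isotropic Gaussian noise injected into the control inputs during a warm-up phase so that a stabilizing controller is found quickly. All specifics here (the system matrices, dimensions, warm-up length T_warm, noise scale sigma_u, and regularization constant) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Unknown (to the learner) stabilizable system x_{t+1} = A x_t + B u_t + w_t.
# A has spectral radius > 1, so the open-loop dynamics are unstable.
n, m = 2, 1                               # state / input dimensions (illustrative)
A = np.array([[1.1, 0.5], [0.0, 0.9]])    # true dynamics, hidden from the learner
B = np.array([[0.0], [1.0]])
Q, R = np.eye(n), np.eye(m)               # quadratic stage costs

def lqr_gain(A_hat, B_hat):
    """Certainty-equivalent LQR gain u = -K x for the current model estimate."""
    P = solve_discrete_are(A_hat, B_hat, Q, R)
    return np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)

T, T_warm, sigma_u = 300, 60, 1.0
x, K = np.zeros(n), np.zeros((m, n))
Z, Y = [], []                             # regressors [x; u] and next states

for t in range(T):
    u = -K @ x
    if t < T_warm:
        # Improved exploration: isotropic Gaussian noise excites every input
        # direction, so the estimates become good enough to stabilize quickly.
        u = u + sigma_u * rng.standard_normal(m)

    x_next = A @ x + B @ u + 0.1 * rng.standard_normal(n)
    Z.append(np.concatenate([x, u]))
    Y.append(x_next)
    x = x_next

    # Regularized least-squares estimate of Theta = [A B] from all data so far.
    Zm, Ym = np.array(Z), np.array(Y)
    Theta = np.linalg.solve(Zm.T @ Zm + 1e-3 * np.eye(n + m), Zm.T @ Ym).T
    try:
        K = lqr_gain(Theta[:, :n], Theta[:, n:])
    except (np.linalg.LinAlgError, ValueError):
        pass  # early estimates may not be stabilizable; keep the previous gain

print("model error:", np.linalg.norm(Theta - np.hstack([A, B])))
```

After the warm-up phase the noise injection stops and the controller runs pure certainty equivalence. The paper's actual algorithm additionally combines this isotropic exploration with a sophisticated (optimism-based) exploration policy and certifies stabilization, which is what yields the $\tilde{\mathcal{O}}(\sqrt{T})$ regret with polynomial dimension dependence quoted above.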

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-lale22a,
  title     = {Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems},
  author    = {Lale, Sahin and Azizzadenesheli, Kamyar and Hassibi, Babak and Anandkumar, Animashree},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {5354--5390},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/lale22a/lale22a.pdf},
  url       = {https://proceedings.mlr.press/v151/lale22a.html}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems
%A Sahin Lale
%A Kamyar Azizzadenesheli
%A Babak Hassibi
%A Animashree Anandkumar
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-lale22a
%I PMLR
%P 5354--5390
%U https://proceedings.mlr.press/v151/lale22a.html
%V 151
APA
Lale, S., Azizzadenesheli, K., Hassibi, B. & Anandkumar, A. (2022). Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:5354-5390. Available from https://proceedings.mlr.press/v151/lale22a.html.