Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation

Marc Abeille, Alessandro Lazaric
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:23-31, 2020.

Abstract

We study the exploration-exploitation dilemma in the linear quadratic regulator (LQR) setting. Inspired by the extended value iteration algorithm used in optimistic algorithms for finite MDPs, we propose to relax the optimistic optimization of OFU-LQ and cast it into a constrained extended LQR problem, where an additional control variable implicitly selects the system dynamics within a confidence interval. We then move to the corresponding Lagrangian formulation, for which we prove strong duality. As a result, we show that an $\epsilon$-optimistic controller can be computed efficiently by solving at most $O\big(\log(1/\epsilon)\big)$ Riccati equations. Finally, we prove that relaxing the original OFU problem does not impact the learning performance, thus recovering the $\widetilde{O}(\sqrt{T})$ regret of OFU-LQ. To the best of our knowledge, this is the first computationally efficient confidence-based algorithm for LQR with worst-case optimal regret guarantees.
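The computational claim above is that an $\epsilon$-optimistic controller can be found with $O\big(\log(1/\epsilon)\big)$ Riccati solves, via a search over a scalar dual variable arising from the Lagrangian relaxation. The Python sketch below is purely illustrative and is not the paper's actual algorithm: the functions lqr_gain and dual_bisection, the toy system matrices, and the dual objective are all hypothetical stand-ins, showing only how a standard discrete algebraic Riccati equation (DARE) solver pairs with a log(1/eps)-step bisection.

import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    # Solve one discrete algebraic Riccati equation (DARE) and return
    # the cost-to-go matrix P and the optimal state-feedback gain K.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K

def dual_bisection(g, lo, hi, eps):
    # Bisection on a scalar dual (Lagrange) variable: terminates after
    # O(log(1/eps)) evaluations of g; in the paper's setting, each
    # evaluation would amount to one Riccati solve.
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy usage on a 2-dimensional double-integrator-like system.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)
P, K = lqr_gain(A, B, Q, R)
print("LQR gain K:", K)

# Hypothetical dual objective with root at 1.0, just to exercise the loop.
mu_star = dual_bisection(lambda mu: 1.0 - mu, lo=0.0, hi=2.0, eps=1e-6)
print("approximate dual optimum:", mu_star)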

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-abeille20a,
  title     = {Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation},
  author    = {Abeille, Marc and Lazaric, Alessandro},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {23--31},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/abeille20a/abeille20a.pdf},
  url       = {https://proceedings.mlr.press/v119/abeille20a.html}
}
Endnote
%0 Conference Paper
%T Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation
%A Marc Abeille
%A Alessandro Lazaric
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-abeille20a
%I PMLR
%P 23--31
%U https://proceedings.mlr.press/v119/abeille20a.html
%V 119
APA
Abeille, M. & Lazaric, A. (2020). Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:23-31. Available from https://proceedings.mlr.press/v119/abeille20a.html.
