Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator

Maryam Fazel, Rong Ge, Sham Kakade, Mehran Mesbahi
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1467-1476, 2018.

Abstract

Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model, 2) they are an “end-to-end” approach, directly optimizing the performance metric of interest, 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities.
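For readers unfamiliar with the setting, the abstract's claims can be made concrete in standard LQR notation (the symbols below follow common optimal-control conventions and are not defined on this page): the state evolves linearly, the policy is a static linear feedback gain K, and the cost is quadratic,

x_{t+1} = A x_t + B u_t, \qquad u_t = -K x_t,

C(K) = \mathbb{E}_{x_0 \sim \mathcal{D}} \left[ \sum_{t=0}^{\infty} \left( x_t^\top Q x_t + u_t^\top R u_t \right) \right].

A (model-free) policy gradient method optimizes over K directly by gradient descent,

K_{n+1} = K_n - \eta \, \nabla C(K_n),

with the gradient estimated from trajectory rollouts rather than from knowledge of (A, B). The "non-convex optimization problem" mentioned in the abstract refers to the fact that C(K) is not convex in K, which is what makes global convergence and polynomial sample/computational complexity guarantees nontrivial.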

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-fazel18a,
  title     = {Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator},
  author    = {Fazel, Maryam and Ge, Rong and Kakade, Sham and Mesbahi, Mehran},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1467--1476},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/fazel18a/fazel18a.pdf},
  url       = {https://proceedings.mlr.press/v80/fazel18a.html},
  abstract  = {Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model, 2) they are an “end-to-end” approach, directly optimizing the performance metric of interest, 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities.}
}
Endnote
%0 Conference Paper
%T Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator
%A Maryam Fazel
%A Rong Ge
%A Sham Kakade
%A Mehran Mesbahi
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-fazel18a
%I PMLR
%P 1467--1476
%U https://proceedings.mlr.press/v80/fazel18a.html
%V 80
%X Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model, 2) they are an “end-to-end” approach, directly optimizing the performance metric of interest, 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities.
APA
Fazel, M., Ge, R., Kakade, S., & Mesbahi, M. (2018). Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1467-1476. Available from https://proceedings.mlr.press/v80/fazel18a.html.
