Approximate Midpoint Policy Iteration for Linear Quadratic Control

Benjamin Gravell, Iman Shames, Tyler Summers
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1080-1092, 2021.

Abstract

We present a midpoint policy iteration algorithm to solve linear quadratic optimal control problems in both model-based and model-free settings. The algorithm is a variation of Newton’s method, and we show that in the model-based setting it achieves cubic convergence, which is superior to standard policy iteration and policy gradient algorithms that achieve quadratic and linear convergence, respectively. We also demonstrate that the algorithm can be approximately implemented without knowledge of the dynamics model by using least-squares estimates of the state-action value function from trajectory data, from which policy improvements can be obtained. With sufficient trajectory data, the policy iterates converge cubically to approximately optimal policies, and this occurs with the same sample budget available to approximate standard policy iteration. Numerical experiments demonstrate the effectiveness of the proposed algorithms.
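
To make the setting concrete, below is a minimal, model-based sketch of standard policy iteration (Hewer's algorithm) for discrete-time LQR, the baseline that the paper's midpoint variant accelerates. This is an illustrative sketch, not the paper's algorithm: the function name, example system matrices, and iteration count are assumptions made here, and the midpoint policy-improvement step itself is given in the paper.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def policy_iteration_lqr(A, B, Q, R, K0, num_iters=10):
    """Exact (model-based) policy iteration for discrete-time LQR with u = -K x."""
    K = K0
    for _ in range(num_iters):
        # Policy evaluation: P_K solves the Lyapunov equation
        # (A - B K)^T P (A - B K) - P + Q + K^T R K = 0
        A_cl = A - B @ K
        P = solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)
        # Policy improvement: greedy gain K = (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Example on a stable 2-state, 1-input system, so K0 = 0 is a stabilizing initial gain.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K_opt, P_opt = policy_iteration_lqr(A, B, Q, R, K0=np.zeros((1, 2)))

Per the abstract, the midpoint variant modifies this Newton-type iteration to achieve cubic rather than quadratic convergence, and in the model-free setting the quantities B^T P B and B^T P A needed for policy improvement are recovered from a least-squares estimate of the state-action value function rather than from the model matrices.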

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-gravell21a,
  title     = {Approximate Midpoint Policy Iteration for Linear Quadratic Control},
  author    = {Gravell, Benjamin and Shames, Iman and Summers, Tyler},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages     = {1080--1092},
  year      = {2021},
  editor    = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume    = {144},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 June},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v144/gravell21a/gravell21a.pdf},
  url       = {https://proceedings.mlr.press/v144/gravell21a.html}
}
APA
Gravell, B., Shames, I. & Summers, T. (2021). Approximate Midpoint Policy Iteration for Linear Quadratic Control. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:1080-1092. Available from https://proceedings.mlr.press/v144/gravell21a.html.
