On the Global Optimum Convergence of Momentum-based Policy Gradient

Yuhao Ding, Junzi Zhang, Javad Lavaei
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:1910-1934, 2022.

Abstract

Policy gradient (PG) methods are popular and efficient for large-scale reinforcement learning due to their relative stability and incremental nature. In recent years, the empirical success of PG methods has led to the development of a theoretical foundation for these methods. In this work, we generalize this line of research by establishing the first set of global convergence results of stochastic PG methods with momentum terms, which have been demonstrated to be efficient recipes for improving PG methods. We study both the soft-max and the Fisher-non-degenerate policy parametrizations, and show that adding a momentum term improves the global optimality sample complexities of vanilla PG methods by $\tilde{\mathcal{O}}(\epsilon^{-1.5})$ and $\tilde{\mathcal{O}}(\epsilon^{-1})$, respectively, where $\epsilon>0$ is the target tolerance. Our results for the generic Fisher-non-degenerate policy parametrizations also provide the first single-loop and finite-batch PG algorithm achieving an $\tilde{\mathcal{O}}(\epsilon^{-3})$ global optimality sample complexity. Finally, as a by-product, our analyses provide general tools for deriving the global convergence rates of stochastic PG methods, which can be readily applied and extended to other PG estimators under the two parametrizations.
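
To make the momentum term concrete, below is a minimal, illustrative Python sketch of stochastic policy gradient with a heavy-ball-style momentum average (an exponential moving average of REINFORCE gradient estimates), shown on a toy soft-max bandit. It is not the exact estimator analyzed in the paper; the bandit setup and the constants K, beta, eta, and n_iters are illustrative choices.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): REINFORCE-style
# stochastic policy gradients with a heavy-ball momentum average, run on a
# toy K-armed bandit with a soft-max policy. All constants are made up.

rng = np.random.default_rng(0)

K = 5                                   # number of actions (arms)
true_means = rng.uniform(0.0, 1.0, K)   # unknown expected rewards
theta = np.zeros(K)                     # soft-max policy parameters
d = np.zeros(K)                         # momentum buffer (EMA of gradients)
beta, eta, n_iters = 0.9, 0.5, 2000     # momentum weight, step size, iterations

def softmax(z):
    z = z - z.max()                     # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

for t in range(n_iters):
    pi = softmax(theta)
    a = rng.choice(K, p=pi)                          # sample an action
    r = true_means[a] + 0.1 * rng.standard_normal()  # observe a noisy reward

    # REINFORCE gradient estimate: r * grad_theta log pi(a | theta),
    # where grad_theta log pi(a | theta) = e_a - pi for a soft-max policy.
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    g = r * grad_log_pi

    # Momentum step: exponential moving average of stochastic gradients,
    # followed by a gradient-ascent update of the policy parameters.
    d = beta * d + (1.0 - beta) * g
    theta = theta + eta * d

print("true mean rewards :", np.round(true_means, 3))
print("learned policy pi :", np.round(softmax(theta), 3))
```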

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-ding22a,
  title     = {On the Global Optimum Convergence of Momentum-based Policy Gradient},
  author    = {Ding, Yuhao and Zhang, Junzi and Lavaei, Javad},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {1910--1934},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/ding22a/ding22a.pdf},
  url       = {https://proceedings.mlr.press/v151/ding22a.html}
}
Endnote
%0 Conference Paper
%T On the Global Optimum Convergence of Momentum-based Policy Gradient
%A Yuhao Ding
%A Junzi Zhang
%A Javad Lavaei
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-ding22a
%I PMLR
%P 1910--1934
%U https://proceedings.mlr.press/v151/ding22a.html
%V 151
APA
Ding, Y., Zhang, J. & Lavaei, J. (2022). On the Global Optimum Convergence of Momentum-based Policy Gradient. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:1910-1934. Available from https://proceedings.mlr.press/v151/ding22a.html.
