A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning

Nhan Pham, Lam Nguyen, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk, Quoc Tran-Dinh
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:374-385, 2020.

Abstract

We propose a novel hybrid stochastic policy gradient estimator that combines an unbiased policy gradient estimator, the REINFORCE estimator, with a biased one, an adapted SARAH estimator, for policy optimization. The hybrid policy gradient estimator is shown to be biased but to have a variance-reduction property. Using this estimator, we develop a new Proximal Hybrid Stochastic Policy Gradient Algorithm (ProxHSPGA) to solve a composite policy optimization problem that allows us to handle constraints or regularizers on the policy parameters. We first propose a single-loop algorithm and then introduce a more practical restarting variant. We prove that both algorithms achieve the best-known trajectory complexity for attaining a first-order stationary point of the composite problem, which improves on the complexity of existing REINFORCE/GPOMDP and SVRPG in the non-composite setting. We evaluate the performance of our algorithm on several well-known examples in reinforcement learning. Numerical results show that our algorithm outperforms two existing methods on these examples. Moreover, the composite setting indeed offers some advantages over the non-composite one on certain problems.
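
As a rough sketch in our own notation (not taken verbatim from the paper), the hybrid estimator can be read as a convex combination of a SARAH-style recursive correction with a fresh unbiased REINFORCE/GPOMDP-type gradient estimate, which is then plugged into a proximal gradient ascent step to handle the regularizer or constraint term of the composite objective:

% Schematic of the hybrid estimator and proximal step (a sketch under our notation,
% not the paper's exact formulation).
% g(\tau \mid \theta): an unbiased (REINFORCE/GPOMDP-type) policy gradient estimate from trajectory \tau
% \omega(\tau \mid \theta', \theta): importance weight correcting for sampling \tau under \theta rather than \theta'
% \beta_t \in [0,1]: mixing parameter, \eta_t > 0: step size, \mathcal{R}: regularizer/constraint term
\begin{align*}
  v_t &= \beta_t \Big[ v_{t-1} + g(\tau_t \mid \theta_t)
        - \omega(\tau_t \mid \theta_{t-1}, \theta_t)\, g(\tau_t \mid \theta_{t-1}) \Big]
        + (1 - \beta_t)\, g(\hat{\tau}_t \mid \theta_t), \\
  \theta_{t+1} &= \mathrm{prox}_{\eta_t \mathcal{R}}\big( \theta_t + \eta_t v_t \big).
\end{align*}

Setting \beta_t = 0 recovers a plain REINFORCE-type update, while \beta_t = 1 recovers a SARAH-type recursion; the mixing is what trades a small bias for reduced variance.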

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-pham20a,
  title     = {A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning},
  author    = {Pham, Nhan and Nguyen, Lam and Phan, Dzung and Nguyen, Phuong Ha and van Dijk, Marten and Tran-Dinh, Quoc},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {374--385},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/pham20a/pham20a.pdf},
  url       = {https://proceedings.mlr.press/v108/pham20a.html},
  abstract  = {We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with another biased one, an adapted SARAH estimator for policy optimization. The hybrid policy gradient estimator is shown to be biased, but has variance reduced property. Using this estimator, we develop a new Proximal Hybrid Stochastic Policy Gradient Algorithm (ProxHSPGA) to solve a composite policy optimization problem that allows us to handle constraints or regularizers on the policy parameters. We first propose a single-looped algorithm then introduce a more practical restarting variant. We prove that both algorithms can achieve the best-known trajectory complexity to attain a first-order stationary point for the composite problem which is better than existing REINFORCE/GPOMDP and SVRPG in the non-composite setting. We evaluate the performance of our algorithm on several well-known examples in reinforcement learning. Numerical results show that our algorithm outperforms two existing methods on these examples. Moreover, the composite settings indeed have some advantages compared to the non-composite ones on certain problems.}
}
Endnote
%0 Conference Paper
%T A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning
%A Nhan Pham
%A Lam Nguyen
%A Dzung Phan
%A Phuong Ha Nguyen
%A Marten van Dijk
%A Quoc Tran-Dinh
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-pham20a
%I PMLR
%P 374--385
%U https://proceedings.mlr.press/v108/pham20a.html
%V 108
%X We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with another biased one, an adapted SARAH estimator for policy optimization. The hybrid policy gradient estimator is shown to be biased, but has variance reduced property. Using this estimator, we develop a new Proximal Hybrid Stochastic Policy Gradient Algorithm (ProxHSPGA) to solve a composite policy optimization problem that allows us to handle constraints or regularizers on the policy parameters. We first propose a single-looped algorithm then introduce a more practical restarting variant. We prove that both algorithms can achieve the best-known trajectory complexity to attain a first-order stationary point for the composite problem which is better than existing REINFORCE/GPOMDP and SVRPG in the non-composite setting. We evaluate the performance of our algorithm on several well-known examples in reinforcement learning. Numerical results show that our algorithm outperforms two existing methods on these examples. Moreover, the composite settings indeed have some advantages compared to the non-composite ones on certain problems.
APA
Pham, N., Nguyen, L., Phan, D., Nguyen, P.H., van Dijk, M. & Tran-Dinh, Q. (2020). A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:374-385. Available from https://proceedings.mlr.press/v108/pham20a.html.
