Characterizing the Gap Between Actor-Critic and Policy Gradient

Junfeng Wen, Saurabh Kumar, Ramki Gummadi, Dale Schuurmans
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11101-11111, 2021.

Abstract

Actor-critic (AC) methods are ubiquitous in reinforcement learning. Although it is understood that AC methods are closely related to policy gradient (PG), their precise connection has not been fully characterized previously. In this paper, we explain the gap between AC and PG methods by identifying the exact adjustment to the AC objective/gradient that recovers the true policy gradient of the cumulative reward objective (PG). Furthermore, by viewing the AC method as a two-player Stackelberg game between the actor and critic, we show that the Stackelberg policy gradient can be recovered as a special case of our more general analysis. Based on these results, we develop practical algorithms, Residual Actor-Critic and Stackelberg Actor-Critic, for estimating the correction between AC and PG and use these to modify the standard AC algorithm. Experiments on popular tabular and continuous environments show the proposed corrections can improve both the sample efficiency and final performance of existing AC methods.
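As background for the abstract, the following sketch uses standard policy-gradient notation (not taken from the paper, and possibly differing from the authors' exact formulation) to contrast the true policy gradient with the actor-critic surrogate obtained by replacing the true action-value function with a learned critic:

\[
\nabla_\theta J(\theta)
  = \mathbb{E}_{s \sim d^{\pi_\theta},\, a \sim \pi_\theta(\cdot \mid s)}
    \big[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \big],
\qquad
\widehat{\nabla}_\theta^{\mathrm{AC}}
  = \mathbb{E}_{s \sim d^{\pi_\theta},\, a \sim \pi_\theta(\cdot \mid s)}
    \big[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q_w(s, a) \big].
\]

Here \(d^{\pi_\theta}\) denotes the state distribution induced by the policy \(\pi_\theta\), \(Q^{\pi_\theta}\) the true action-value function, and \(Q_w\) the critic. The two expressions coincide only when the critic is exact; the paper characterizes the exact adjustment that closes this gap for an imperfect critic, with the Stackelberg policy gradient recovered as a special case.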

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-wen21b,
  title     = {Characterizing the Gap Between Actor-Critic and Policy Gradient},
  author    = {Wen, Junfeng and Kumar, Saurabh and Gummadi, Ramki and Schuurmans, Dale},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11101--11111},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/wen21b/wen21b.pdf},
  url       = {https://proceedings.mlr.press/v139/wen21b.html},
  abstract  = {Actor-critic (AC) methods are ubiquitous in reinforcement learning. Although it is understood that AC methods are closely related to policy gradient (PG), their precise connection has not been fully characterized previously. In this paper, we explain the gap between AC and PG methods by identifying the exact adjustment to the AC objective/gradient that recovers the true policy gradient of the cumulative reward objective (PG). Furthermore, by viewing the AC method as a two-player Stackelberg game between the actor and critic, we show that the Stackelberg policy gradient can be recovered as a special case of our more general analysis. Based on these results, we develop practical algorithms, Residual Actor-Critic and Stackelberg Actor-Critic, for estimating the correction between AC and PG and use these to modify the standard AC algorithm. Experiments on popular tabular and continuous environments show the proposed corrections can improve both the sample efficiency and final performance of existing AC methods.}
}
Endnote
%0 Conference Paper
%T Characterizing the Gap Between Actor-Critic and Policy Gradient
%A Junfeng Wen
%A Saurabh Kumar
%A Ramki Gummadi
%A Dale Schuurmans
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-wen21b
%I PMLR
%P 11101--11111
%U https://proceedings.mlr.press/v139/wen21b.html
%V 139
%X Actor-critic (AC) methods are ubiquitous in reinforcement learning. Although it is understood that AC methods are closely related to policy gradient (PG), their precise connection has not been fully characterized previously. In this paper, we explain the gap between AC and PG methods by identifying the exact adjustment to the AC objective/gradient that recovers the true policy gradient of the cumulative reward objective (PG). Furthermore, by viewing the AC method as a two-player Stackelberg game between the actor and critic, we show that the Stackelberg policy gradient can be recovered as a special case of our more general analysis. Based on these results, we develop practical algorithms, Residual Actor-Critic and Stackelberg Actor-Critic, for estimating the correction between AC and PG and use these to modify the standard AC algorithm. Experiments on popular tabular and continuous environments show the proposed corrections can improve both the sample efficiency and final performance of existing AC methods.
APA
Wen, J., Kumar, S., Gummadi, R., & Schuurmans, D. (2021). Characterizing the Gap Between Actor-Critic and Policy Gradient. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11101-11111. Available from https://proceedings.mlr.press/v139/wen21b.html.