Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation

Shangtong Zhang, Bo Liu, Hengshuai Yao, Shimon Whiteson
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11204-11213, 2020.

Abstract

We present the first provably convergent two-timescale off-policy actor-critic algorithm (COF-PAC) with function approximation. Key to COF-PAC is the introduction of a new critic, the emphasis critic, which is trained via Gradient Emphasis Learning (GEM), a novel combination of the key ideas of Gradient Temporal Difference Learning and Emphatic Temporal Difference Learning. With the help of the emphasis critic and the canonical value function critic, we show convergence for COF-PAC, where the critics are linear and the actor can be nonlinear.
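For readers who want a concrete picture of the two-timescale structure the abstract describes, below is a minimal sketch in Python. It is not the authors' algorithm: the one-hot features, softmax target policy, uniform behavior policy, step sizes, and the simplified TD-style recursions standing in for the paper's gradient-TD value critic and GEM emphasis critic are all illustrative assumptions; the precise updates and the conditions under which they converge are given in the paper.

import numpy as np

# --- Illustrative problem setup (assumed, not from the paper) ---
n_states, n_actions = 5, 3
features = np.eye(n_states)                    # one-hot state features phi(s)
phi = lambda s: features[s]
mu = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform behavior policy

def pi_probs(s, theta):
    """Softmax target policy with per-state action preferences theta[s]."""
    prefs = theta[s]
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def two_timescale_step(theta, w_v, w_m, transition,
                       alpha_fast=1e-1, alpha_slow=1e-2, gamma=0.9):
    """One update of a two-timescale off-policy actor-critic sketch.

    Both linear critics move on the fast timescale; the actor moves on
    the slow timescale. The updates below are simplified TD-style
    stand-ins for the paper's GTD and GEM recursions.
    """
    s, a, r, s_next = transition
    rho = pi_probs(s, theta)[a] / mu[s, a]     # importance sampling ratio

    # Fast timescale: linear value function critic.
    delta = r + gamma * phi(s_next) @ w_v - phi(s) @ w_v
    w_v = w_v + alpha_fast * rho * delta * phi(s)

    # Fast timescale: linear emphasis critic. GEM itself is a
    # gradient-TD-style method; this semi-gradient update toward a
    # predecessor-bootstrapped target is only a hypothetical stand-in.
    m_err = 1.0 + gamma * rho * (phi(s) @ w_m) - phi(s_next) @ w_m
    w_m = w_m + alpha_fast * m_err * phi(s_next)

    # Slow timescale: actor follows an emphasis-weighted policy gradient.
    m_hat = max(phi(s) @ w_m, 0.0)             # estimated emphasis at s
    grad_log = -pi_probs(s, theta)
    grad_log[a] += 1.0                         # grad of log softmax
    theta[s] = theta[s] + alpha_slow * m_hat * rho * delta * grad_log
    return theta, w_v, w_m

The defining feature of the two-timescale scheme is alpha_fast >> alpha_slow: the critics change quickly enough to track their fixed points under the slowly moving policy, which is the separation the convergence analysis exploits.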

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20s,
  title     = {Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation},
  author    = {Zhang, Shangtong and Liu, Bo and Yao, Hengshuai and Whiteson, Shimon},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11204--11213},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20s/zhang20s.pdf},
  url       = {https://proceedings.mlr.press/v119/zhang20s.html},
  abstract  = {We present the first provably convergent two-timescale off-policy actor-critic algorithm (COF-PAC) with function approximation. Key to COF-PAC is the introduction of a new critic, the emphasis critic, which is trained via Gradient Emphasis Learning (GEM), a novel combination of the key ideas of Gradient Temporal Difference Learning and Emphatic Temporal Difference Learning. With the help of the emphasis critic and the canonical value function critic, we show convergence for COF-PAC, where the critics are linear and the actor can be nonlinear.}
}
Endnote
%0 Conference Paper
%T Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation
%A Shangtong Zhang
%A Bo Liu
%A Hengshuai Yao
%A Shimon Whiteson
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20s
%I PMLR
%P 11204--11213
%U https://proceedings.mlr.press/v119/zhang20s.html
%V 119
%X We present the first provably convergent two-timescale off-policy actor-critic algorithm (COF-PAC) with function approximation. Key to COF-PAC is the introduction of a new critic, the emphasis critic, which is trained via Gradient Emphasis Learning (GEM), a novel combination of the key ideas of Gradient Temporal Difference Learning and Emphatic Temporal Difference Learning. With the help of the emphasis critic and the canonical value function critic, we show convergence for COF-PAC, where the critics are linear and the actor can be nonlinear.
APA
Zhang, S., Liu, B., Yao, H. & Whiteson, S. (2020). Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11204-11213. Available from https://proceedings.mlr.press/v119/zhang20s.html.