Breaking the Deadly Triad with a Target Network

Shangtong Zhang, Hengshuai Yao, Shimon Whiteson
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12621-12631, 2021.

Abstract

The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously. In this paper, we investigate the target network as a tool for breaking the deadly triad, providing theoretical support for the conventional wisdom that a target network stabilizes training. We first propose and analyze a novel target network update rule which augments the commonly used Polyak-averaging style update with two projections. We then apply the target network and ridge regularization in several divergent algorithms and show their convergence to regularized TD fixed points. Those algorithms are off-policy with linear function approximation and bootstrapping, spanning both policy evaluation and control, as well as both discounted and average-reward settings. In particular, we provide the first convergent linear $Q$-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization.
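
To make the ingredients in the abstract concrete, the sketch below shows one plausible reading of them: a semi-gradient TD(0) step with linear function approximation, ridge regularization on the online weights, and a Polyak-averaging target-network update augmented with projections. The l2-ball projection, its radius, and the step sizes (alpha, tau, eta) are illustrative assumptions for this sketch, not the paper's exact construction; the paper specifies its own projection sets and algorithms.

import numpy as np

def project_l2_ball(w, radius):
    # Project w onto the l2 ball of the given radius. The ball projection is
    # an illustrative assumption; the paper defines its own projection sets.
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def td_step_with_target_network(w, w_target, phi, r, phi_next,
                                gamma=0.99, alpha=0.1, tau=0.01,
                                eta=0.01, radius=100.0):
    # One semi-gradient TD(0) step with linear function approximation,
    # ridge regularization, and a projected Polyak-averaging target update.
    # Bootstrap from the slowly moving target weights, not the online weights.
    td_error = r + gamma * phi_next.dot(w_target) - phi.dot(w)
    # Ridge term (eta) shrinks the online weights toward zero.
    w = project_l2_ball(w + alpha * (td_error * phi - eta * w), radius)
    # Polyak-averaging target update, itself followed by a second projection.
    w_target = project_l2_ball((1 - tau) * w_target + tau * w, radius)
    return w, w_target

# Minimal usage example with synthetic features (illustration only).
rng = np.random.default_rng(0)
w, w_target = np.zeros(4), np.zeros(4)
for _ in range(1000):
    phi, phi_next = rng.normal(size=4), rng.normal(size=4)
    w, w_target = td_step_with_target_network(w, w_target, phi, rng.normal(), phi_next)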

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-zhang21y,
  title     = {Breaking the Deadly Triad with a Target Network},
  author    = {Zhang, Shangtong and Yao, Hengshuai and Whiteson, Shimon},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {12621--12631},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/zhang21y/zhang21y.pdf},
  url       = {https://proceedings.mlr.press/v139/zhang21y.html}
}
Endnote
%0 Conference Paper
%T Breaking the Deadly Triad with a Target Network
%A Shangtong Zhang
%A Hengshuai Yao
%A Shimon Whiteson
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-zhang21y
%I PMLR
%P 12621--12631
%U https://proceedings.mlr.press/v139/zhang21y.html
%V 139
APA
Zhang, S., Yao, H. & Whiteson, S. (2021). Breaking the Deadly Triad with a Target Network. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:12621-12631. Available from https://proceedings.mlr.press/v139/zhang21y.html.
