Gradient Temporal-Difference Learning with Regularized Corrections

Sina Ghiassian, Andrew Patterson, Shivam Garg, Dhawal Gupta, Adam White, Martha White
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3524-3534, 2020.

Abstract

It is still common to use Q-learning and temporal difference (TD) learning, even though they have divergence issues and sound Gradient TD alternatives exist, because divergence seems rare and they typically perform well. However, recent work with large neural network learning systems reveals that instability is more common than previously thought. Practitioners face a difficult dilemma: choose an easy-to-use and performant TD method, or a more complex algorithm that is more sound but harder to tune and all but unexplored with non-linear function approximation or control. In this paper, we introduce a new method called TD with Regularized Corrections (TDRC) that attempts to balance ease of use, soundness, and performance. It behaves as well as TD when TD performs well, but is sound in cases where TD diverges. We empirically investigate TDRC across a range of problems, for both prediction and control, and for both linear and non-linear function approximation, and show, potentially for the first time, that Gradient TD methods could be a better alternative to TD and Q-learning.
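The abstract names the method but does not spell out its update. For orientation only: per the paper, TDRC is the TDC (gradient-correction) update with an ℓ2 regularizer added to the secondary weight vector, with the regularization strength fixed (β = 1 in the paper's experiments) so it introduces no new parameter to tune. Below is a minimal sketch for linear prediction; the importance-sampling weights needed for the off-policy case are omitted, and the variable names are ours, not the paper's.

import numpy as np

def tdrc_update(w, h, x, r, x_next, gamma, alpha, beta=1.0):
    """One TDRC update for linear prediction (a sketch, not the
    paper's reference code).

    The primary weights w follow the TDC gradient-correction update;
    the secondary weights h, which estimate the expected TD error
    given features x, get an extra ell-2 penalty scaled by beta.
    """
    delta = r + gamma * w.dot(x_next) - w.dot(x)  # TD error
    # Primary update: TD step plus the gradient-correction term.
    w = w + alpha * (delta * x - gamma * h.dot(x) * x_next)
    # Secondary update: TDC's h-update plus the regularizer -beta * h.
    h = h + alpha * ((delta - h.dot(x)) * x - beta * h)
    return w, h

# Tiny usage sketch: 5 random features, one transition.
rng = np.random.default_rng(0)
w, h = np.zeros(5), np.zeros(5)
x, x_next = rng.random(5), rng.random(5)
w, h = tdrc_update(w, h, x, r=1.0, x_next=x_next, gamma=0.99, alpha=0.01)

With β = 0 this reduces to TDC, while for large β the h-correction vanishes and the update approaches plain TD; fixing β = 1 is how the method trades off between the two without adding a tuned hyperparameter.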

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-ghiassian20a,
  title     = {Gradient Temporal-Difference Learning with Regularized Corrections},
  author    = {Ghiassian, Sina and Patterson, Andrew and Garg, Shivam and Gupta, Dhawal and White, Adam and White, Martha},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3524--3534},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ghiassian20a/ghiassian20a.pdf},
  url       = {https://proceedings.mlr.press/v119/ghiassian20a.html}
}
Endnote
%0 Conference Paper
%T Gradient Temporal-Difference Learning with Regularized Corrections
%A Sina Ghiassian
%A Andrew Patterson
%A Shivam Garg
%A Dhawal Gupta
%A Adam White
%A Martha White
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ghiassian20a
%I PMLR
%P 3524--3534
%U https://proceedings.mlr.press/v119/ghiassian20a.html
%V 119
APA
Ghiassian, S., Patterson, A., Garg, S., Gupta, D., White, A. & White, M. (2020). Gradient Temporal-Difference Learning with Regularized Corrections. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3524-3534. Available from https://proceedings.mlr.press/v119/ghiassian20a.html.
