On Convergence of Emphatic Temporal-Difference Learning

H. Yu
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1724-1751, 2015.

Abstract

We consider emphatic temporal-difference learning algorithms for policy evaluation in discounted Markov decision processes with finite spaces. Such algorithms were recently proposed by Sutton, Mahmood, and White (2015) as an improved solution to the problem of divergence of off-policy temporal-difference learning with linear function approximation. We present in this paper the first convergence proofs for two emphatic algorithms, ETD(λ) and ELSTD(λ). We prove, under general off-policy conditions, the convergence in L^1 of the ELSTD(λ) iterates, and the almost sure convergence of the approximate value functions calculated by both algorithms using a single infinitely long trajectory. Our analysis involves new techniques with applications beyond emphatic algorithms, leading, for example, to the first proof that standard TD(λ) also converges under off-policy training for λ sufficiently large.
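As a concrete reference for readers meeting the algorithm for the first time, the sketch below shows the per-step ETD(λ) update of Sutton, Mahmood, and White (2015) with linear features, assuming a constant discount factor, a constant λ, and unit interest in every state. The function name, the trajectory format, and the parameter defaults are illustrative choices, not taken from the paper.

    import numpy as np

    def etd_lambda(trajectory, d, gamma=0.99, lam=0.9, alpha=0.01):
        """Run ETD(lambda) over a trajectory generated by the behavior policy.

        trajectory -- iterable of tuples (phi_t, rho_t, r_next, phi_next), where
                      phi_t and phi_next are feature vectors of S_t and S_{t+1},
                      rho_t = pi(A_t|S_t) / mu(A_t|S_t) is the importance ratio,
                      and r_next is the reward R_{t+1}.
        d          -- dimension of the feature vectors.
        """
        theta = np.zeros(d)   # weights of the linear value estimate
        e = np.zeros(d)       # emphatic eligibility trace
        F = 0.0               # follow-on trace
        rho_prev = 1.0        # no importance correction before the first step
        for phi_t, rho, r_next, phi_next in trajectory:
            F = gamma * rho_prev * F + 1.0            # follow-on trace, interest i(S_t) = 1
            M = lam + (1.0 - lam) * F                 # emphasis weight M_t
            e = rho * (gamma * lam * e + M * phi_t)   # eligibility trace update
            delta = r_next + gamma * theta @ phi_next - theta @ phi_t   # TD error
            theta = theta + alpha * delta * e         # emphatic TD(lambda) step
            rho_prev = rho
        return theta

ELSTD(λ) builds the same follow-on and eligibility traces but, instead of taking stochastic steps in θ, accumulates the corresponding least-squares statistics and solves for the fixed point; the paper's convergence results concern the long-run behavior of these iterates along a single trajectory.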

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Yu15,
  title     = {On Convergence of Emphatic Temporal-Difference Learning},
  author    = {Yu, H.},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {1724--1751},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Yu15.pdf},
  url       = {https://proceedings.mlr.press/v40/Yu15.html}
}
APA
Yu, H. (2015). On Convergence of Emphatic Temporal-Difference Learning. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:1724-1751. Available from https://proceedings.mlr.press/v40/Yu15.html.
