On Convergence of Emphatic Temporal-Difference Learning
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1724-1751, 2015.
Abstract
We consider emphatic temporal-difference learning algorithms for policy evaluation in discounted Markov decision processes with finite state and action spaces. Such algorithms were recently proposed by Sutton, Mahmood, and White (2015) as an improved solution to the problem of divergence of off-policy temporal-difference learning with linear function approximation. We present in this paper the first convergence proofs for two emphatic algorithms, ETD(λ) and ELSTD(λ). We prove, under general off-policy conditions, the convergence in L^1 of the ELSTD(λ) iterates, and the almost sure convergence of the approximate value functions calculated by both algorithms using a single infinitely long trajectory. Our analysis involves new techniques with applications beyond emphatic algorithms, leading, for example, to the first proof that standard TD(λ) also converges under off-policy training for λ sufficiently large.
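For readers unfamiliar with the update rules whose convergence is analyzed, the following is a minimal sketch of a single ETD(λ) step with linear function approximation, following the followon-trace and emphasis recursions proposed by Sutton, Mahmood, and White (2015). It assumes a constant discount factor γ and constant λ; the function name etd_lambda_step and its argument names are illustrative choices, not notation from this paper.

```python
import numpy as np

def etd_lambda_step(theta, e, F, phi_t, phi_tp1, reward,
                    rho_t, rho_prev, alpha, gamma=0.99, lam=0.9, i_t=1.0):
    """One ETD(lambda) update for off-policy evaluation with linear features.

    theta    : weight vector
    e        : eligibility trace from the previous step (zero vector at t = 0)
    F        : followon trace from the previous step (zero at t = 0)
    phi_t    : feature vector of the current state
    phi_tp1  : feature vector of the next state
    rho_t    : importance ratio pi(a_t | s_t) / mu(a_t | s_t)
    rho_prev : importance ratio of the previous step (take 0 at t = 0, so F_0 = i_0)
    i_t      : interest assigned to the current state
    Returns the updated (theta, e, F).
    """
    # Followon trace: F_t = gamma * rho_{t-1} * F_{t-1} + i_t
    F = gamma * rho_prev * F + i_t
    # Emphasis: M_t = lam * i_t + (1 - lam) * F_t
    M = lam * i_t + (1.0 - lam) * F
    # Eligibility trace: e_t = rho_t * (gamma * lam * e_{t-1} + M_t * phi_t)
    e = rho_t * (gamma * lam * e + M * phi_t)
    # TD error and emphatic TD update of the weights
    delta = reward + gamma * theta @ phi_tp1 - theta @ phi_t
    theta = theta + alpha * delta * e
    return theta, e, F
```

The emphasis M_t reweights updates according to how much the target policy would have "followed on" to the current state, which is what restores stability of the off-policy TD update that the paper's convergence results rely on.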