Finite-Time Error Bounds for Biased Stochastic Approximation with Applications to Q-Learning
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3015-3024, 2020.
Abstract
Inspired by the widespread use of Q-learning algorithms in reinforcement learning (RL), this paper studies a class of biased stochastic approximation (SA) procedures under an 'ergodic-like' assumption on the underlying stochastic noise sequence. Leveraging a multi-step Lyapunov function that looks ahead to several future updates to accommodate the gradient bias, we prove a general result on the convergence of the iterates, and use it to derive finite-time bounds on the mean-square error in the case of constant step-sizes. This novel viewpoint makes finite-time analysis of biased SA algorithms possible under a broad family of stochastic perturbations. For direct comparison with past works, we also demonstrate these bounds by applying them to Q-learning with linear function approximation, under the realistic Markov chain observation model. The resultant finite-time error bound for Q-learning is the first of its kind, in the sense that it holds: i) for the unmodified version (i.e., without making any modifications to the updates), and ii) for Markov chains starting from any initial distribution; at least one of these conditions has to be violated for existing results to be applicable.
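To make the analyzed setting concrete, below is a minimal sketch of unmodified Q-learning with linear function approximation driven by a single Markovian trajectory with a constant step-size, i.e., the regime the bounds above cover. The toy MDP (`P`, `R`, `FEATS`), the feature dimension, the uniform behavior policy, and all numerical parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP: 5 states, 2 actions, d = 4 features per (s, a) pair.
N_S, N_A, D = 5, 2, 4
P = rng.dirichlet(np.ones(N_S), size=(N_S, N_A))  # P[s, a] is a next-state distribution
R = rng.standard_normal((N_S, N_A))               # reward table
FEATS = rng.standard_normal((N_S, N_A, D)) / np.sqrt(D)

def phi(s, a):
    """Feature map for the linear approximation Q(s, a) ~= phi(s, a) @ theta."""
    return FEATS[s, a]

def q_learning_lfa(T=10_000, alpha=0.05, gamma=0.9, s0=0):
    """Unmodified Q-learning with linear function approximation, run along a
    single Markovian trajectory with a constant step-size alpha."""
    theta = np.zeros(D)
    s = s0  # arbitrary initial state: any initial distribution is allowed
    for _ in range(T):
        a = rng.integers(N_A)                # uniform behavior policy (illustrative)
        r = R[s, a]
        s_next = rng.choice(N_S, p=P[s, a])  # one step of the Markov chain
        q_next = max(phi(s_next, b) @ theta for b in range(N_A))  # greedy bootstrap
        td = r + gamma * q_next - phi(s, a) @ theta
        theta += alpha * td * phi(s, a)      # semi-gradient update; no projection
        s = s_next                           # Markovian data: no i.i.d. restarts
    return theta

theta_hat = q_learning_lfa()
```

Note that the update uses the raw semi-gradient step with no projection or other modification, and the data come from one unbroken Markov chain rather than i.i.d. samples; these are precisely the two features that the paper's finite-time bound accommodates.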