Finite Sample Analysis of Two-Timescale Stochastic Approximation with Applications to Reinforcement Learning
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1199-1233, 2018.
Abstract
Two-timescale Stochastic Approximation (SA) algorithms are widely used in Reinforcement Learning (RL). Their iterates have two parts that are updated using distinct stepsizes. In this work, we develop a novel recipe for their finite sample analysis. Using this, we provide a concentration bound, which is the first such result for a two-timescale SA. The type of bound we obtain is known as a "lock-in probability". We also introduce a new projection scheme, in which the time between successive projections increases exponentially. This scheme allows one to elegantly transform a lock-in probability into a convergence rate result for projected two-timescale SA. From this latter result, we then extract key insights on stepsize selection. As an application, we finally obtain convergence rates for the projected two-timescale RL algorithms GTD(0), GTD2, and TDC.
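To illustrate the setting the abstract describes, here is a minimal sketch of a generic two-timescale SA iteration. It is not the paper's algorithm: the drift functions, noise model, and stepsize exponents below are illustrative assumptions. The key feature is that the fast iterate `w` uses a stepsize that decays more slowly than that of the slow iterate `theta`, so `w` effectively tracks a quantity determined by the current `theta`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-timescale SA (toy drift functions, not the paper's setup).
# Slow iterate theta uses stepsize a_n = 1/n; fast iterate w uses
# b_n = 1/n**0.6, so a_n / b_n -> 0 (two distinct timescales).
theta, w = 1.0, 0.0
for n in range(1, 50001):
    a_n = 1.0 / n          # slow-timescale stepsize
    b_n = 1.0 / n ** 0.6   # fast-timescale stepsize (decays more slowly)
    xi = rng.normal(scale=0.1, size=2)  # martingale-difference noise

    # Fast iterate: for fixed theta, w is driven toward theta.
    w += b_n * (theta - w + xi[0])
    # Slow iterate: sees the tracked w; here theta is driven toward 0.
    theta += a_n * (-w + xi[1])

print(theta, w)
```

In the limiting ODE view, the fast component equilibrates at `w ≈ theta`, after which the slow component follows `theta' = -theta` and both iterates drift toward 0. The paper's finite sample analysis quantifies how quickly such coupled iterates concentrate, rather than relying on this asymptotic argument.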