Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise

Maxim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, Hoi-To Wai
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2144-2203, 2020.

Abstract

Linear two-timescale stochastic approximation (SA) is an important class of algorithms that has become popular in reinforcement learning (RL), particularly for the policy evaluation problem. Recently, a number of works have been devoted to establishing a finite-time analysis of the scheme, especially under Markovian (non-i.i.d.) noise settings, which are ubiquitous in practice. In this paper, we provide a finite-time analysis for linear two-timescale SA. Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise; only the constants are affected by the mixing time of the Markov chain. With an appropriate step size schedule, the transient term in the expected error bound is $o(1/k^c)$ and the steady-state term is ${\cal O}(1/k)$, where $c>1$ and $k$ is the iteration number. Furthermore, we present an asymptotic expansion of the expected error with a matching lower bound of $\Omega(1/k)$. A simple numerical experiment is presented to support our theory.
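The coupled recursion behind linear two-timescale SA can be illustrated with a toy sketch: a slow iterate updated with step size $\beta_k \sim 1/k$ and a fast iterate with step size $\gamma_k \sim 1/k^{2/3}$. All constants below are illustrative assumptions, and the noise is i.i.d. Gaussian for simplicity, whereas the paper's analysis covers general linear systems driven by Markovian noise.

```python
import numpy as np

# Hypothetical coefficients of a 1-d linear two-timescale system
# (the paper treats the general matrix-valued, Markovian-noise case).
A11, A12, A21, A22 = 1.0, 0.5, 0.3, 1.0
b1, b2 = 1.0, 0.5

def two_timescale_sa(num_iters=200_000, noise_std=0.1, seed=0):
    """Linear two-timescale SA with beta_k ~ 1/k (slow iterate theta)
    and gamma_k ~ 1/k^{2/3} (fast iterate w), i.i.d. Gaussian noise."""
    rng = np.random.default_rng(seed)
    theta, w = 0.0, 0.0
    for k in range(1, num_iters + 1):
        beta_k = 1.0 / k            # slow step size
        gamma_k = 1.0 / k ** (2 / 3)  # fast step size, gamma_k >> beta_k
        # Noisy linear drift terms (martingale-difference noise here).
        theta += beta_k * (b1 - A11 * theta - A12 * w
                           + noise_std * rng.standard_normal())
        w += gamma_k * (b2 - A21 * theta - A22 * w
                        + noise_std * rng.standard_normal())
    return theta, w

# Exact fixed point: solve the fast equation for w*(theta) = (b2 - A21*theta)/A22,
# then plug it into the slow equation b1 - A11*theta - A12*w*(theta) = 0.
theta_star = (b1 - A12 * b2 / A22) / (A11 - A12 * A21 / A22)
w_star = (b2 - A21 * theta_star) / A22
```

On this scale-separated schedule the fast iterate tracks its equilibrium $w^*(\theta)$ while the slow iterate drifts toward the joint fixed point, matching the step size regime analyzed in the paper.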

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-kaledin20a,
  title     = {Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise},
  author    = {Kaledin, Maxim and Moulines, Eric and Naumov, Alexey and Tadic, Vladislav and Wai, Hoi-To},
  pages     = {2144--2203},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/kaledin20a/kaledin20a.pdf},
  url       = {http://proceedings.mlr.press/v125/kaledin20a.html},
  abstract  = {Linear two-timescale stochastic approximation (SA) is an important class of algorithms that has become popular in reinforcement learning (RL), particularly for the policy evaluation problem. Recently, a number of works have been devoted to establishing a finite-time analysis of the scheme, especially under Markovian (non-i.i.d.) noise settings, which are ubiquitous in practice. In this paper, we provide a finite-time analysis for linear two-timescale SA. Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise; only the constants are affected by the mixing time of the Markov chain. With an appropriate step size schedule, the transient term in the expected error bound is $o(1/k^c)$ and the steady-state term is ${\cal O}(1/k)$, where $c>1$ and $k$ is the iteration number. Furthermore, we present an asymptotic expansion of the expected error with a matching lower bound of $\Omega(1/k)$. A simple numerical experiment is presented to support our theory.}
}
Endnote
%0 Conference Paper
%T Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise
%A Maxim Kaledin
%A Eric Moulines
%A Alexey Naumov
%A Vladislav Tadic
%A Hoi-To Wai
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-kaledin20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 2144--2203
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X Linear two-timescale stochastic approximation (SA) is an important class of algorithms that has become popular in reinforcement learning (RL), particularly for the policy evaluation problem. Recently, a number of works have been devoted to establishing a finite-time analysis of the scheme, especially under Markovian (non-i.i.d.) noise settings, which are ubiquitous in practice. In this paper, we provide a finite-time analysis for linear two-timescale SA. Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise; only the constants are affected by the mixing time of the Markov chain. With an appropriate step size schedule, the transient term in the expected error bound is $o(1/k^c)$ and the steady-state term is ${\cal O}(1/k)$, where $c>1$ and $k$ is the iteration number. Furthermore, we present an asymptotic expansion of the expected error with a matching lower bound of $\Omega(1/k)$. A simple numerical experiment is presented to support our theory.
APA
Kaledin, M., Moulines, E., Naumov, A., Tadic, V. &amp; Wai, H. (2020). Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:2144-2203.
