Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise

Maxim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, Hoi-To Wai
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2144-2203, 2020.

Abstract

The linear two-timescale stochastic approximation (SA) scheme is an important class of algorithms that has become popular in reinforcement learning (RL), particularly for the policy evaluation problem. Recently, a number of works have been devoted to establishing a finite-time analysis of the scheme, especially under the Markovian (non-i.i.d.) noise settings that are ubiquitous in practice. In this paper, we provide a finite-time analysis for linear two-timescale SA. Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise; only the constants are affected by the mixing time of the Markov chain. With an appropriate step size schedule, the transient term in the expected error bound is $o(1/k^c)$ and the steady-state term is ${\cal O}(1/k)$, where $c>1$ and $k$ is the iteration number. Furthermore, we present an asymptotic expansion of the expected error with a matching lower bound of $\Omega(1/k)$. A simple numerical experiment is presented to support our theory.
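
For readers unfamiliar with the recursion being analysed, the sketch below illustrates one linear two-timescale SA iteration in the policy-evaluation setting, using a GTD2-style update driven by a small Markov chain. It is purely illustrative: the transition matrix, features, rewards, and step-size constants are hypothetical choices, not the paper's experiment; the step-size exponents merely exemplify a fast/slow schedule of the kind the abstract refers to.

# Illustrative sketch only (not the paper's experiment): a GTD2-style
# linear two-timescale SA loop driven by Markovian noise. The Markov
# chain, features, rewards, and step-size constants are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state Markov chain generating the (non-i.i.d.) noise.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
features = np.array([[1.0, 0.0],
                     [0.5, 0.5],
                     [0.0, 1.0]])    # per-state feature vectors
rewards = np.array([1.0, 0.0, 0.5])  # hypothetical per-state rewards
discount = 0.9

theta = np.zeros(2)  # slow iterate (value-function weights)
w = np.zeros(2)      # fast iterate (auxiliary weights)
state = 0
k0 = 100             # step-size offset, a hypothetical choice

for k in range(10_000):
    next_state = rng.choice(3, p=P[state])
    phi, phi_next = features[state], features[next_state]
    td_error = rewards[state] + discount * phi_next @ theta - phi @ theta

    # Two-timescale step sizes: the fast step decays more slowly
    # (O(k^{-2/3})) than the slow step (O(k^{-1})).
    beta_k = 1.0 / (k + k0)
    gamma_k = 1.0 / (k + k0) ** (2.0 / 3.0)

    # GTD2 updates, one common instance of linear two-timescale SA.
    theta = theta + beta_k * (phi - discount * phi_next) * (phi @ w)
    w = w + gamma_k * (td_error - phi @ w) * phi

    state = next_state

print("estimated value-function weights:", theta)

The point of the sketch is the structure of the recursion: both iterates are updated with linear (in theta and w) increments computed from a single Markovian sample, and the slow iterate moves on a step size that decays faster than the fast iterate's, which is the regime the finite-time bounds above address.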

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-kaledin20a,
  title     = {Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise},
  author    = {Kaledin, Maxim and Moulines, Eric and Naumov, Alexey and Tadic, Vladislav and Wai, Hoi-To},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {2144--2203},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/kaledin20a/kaledin20a.pdf},
  url       = {https://proceedings.mlr.press/v125/kaledin20a.html},
  abstract  = {Linear two-timescale stochastic approximation (SA) scheme is an important class of algorithms which has become popular in reinforcement learning (RL), particularly for the policy evaluation problem. Recently, a number of works have been devoted to establishing the finite time analysis of the scheme, especially under the Markovian (non-i.i.d.) noise settings that are ubiquitous in practice. In this paper, we provide a finite-time analysis for linear two timescale SA. Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise, only the constants are affected by the mixing time of the Markov chain. With an appropriate step size schedule, the transient term in the expected error bound is $o(1/k^c)$ and the steady-state term is ${\cal O}(1/k)$, where $c>1$ and $k$ is the iteration number. Furthermore, we present an asymptotic expansion of the expected error with a matching lower bound of $\Omega(1/k)$. A simple numerical experiment is presented to support our theory.}
}
Endnote
%0 Conference Paper
%T Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise
%A Maxim Kaledin
%A Eric Moulines
%A Alexey Naumov
%A Vladislav Tadic
%A Hoi-To Wai
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-kaledin20a
%I PMLR
%P 2144--2203
%U https://proceedings.mlr.press/v125/kaledin20a.html
%V 125
%X Linear two-timescale stochastic approximation (SA) scheme is an important class of algorithms which has become popular in reinforcement learning (RL), particularly for the policy evaluation problem. Recently, a number of works have been devoted to establishing the finite time analysis of the scheme, especially under the Markovian (non-i.i.d.) noise settings that are ubiquitous in practice. In this paper, we provide a finite-time analysis for linear two timescale SA. Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise, only the constants are affected by the mixing time of the Markov chain. With an appropriate step size schedule, the transient term in the expected error bound is $o(1/k^c)$ and the steady-state term is ${\cal O}(1/k)$, where $c>1$ and $k$ is the iteration number. Furthermore, we present an asymptotic expansion of the expected error with a matching lower bound of $\Omega(1/k)$. A simple numerical experiment is presented to support our theory.
APA
Kaledin, M., Moulines, E., Naumov, A., Tadic, V. & Wai, H. (2020). Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:2144-2203. Available from https://proceedings.mlr.press/v125/kaledin20a.html.