Central Limit Theorem for Two-Timescale Stochastic Approximation with Markovian Noise: Theory and Applications

Jie Hu, Vishwaraj Doshi, Do Young Eun
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1477-1485, 2024.

Abstract

Two-timescale stochastic approximation (TTSA) is among the most general frameworks for iterative stochastic algorithms. It encompasses well-known stochastic optimization methods, such as SGD variants and methods designed for bilevel or minimax problems, as well as reinforcement learning algorithms such as the family of gradient-based temporal difference (GTD) algorithms. In this paper, we conduct an in-depth asymptotic analysis of TTSA under controlled Markovian noise via a central limit theorem (CLT), uncovering the coupled dynamics of TTSA influenced by the underlying Markov chain, which have not been addressed by previous CLT results for TTSA that assume only martingale difference noise. Building upon our CLT, we extend efficient sampling strategies from vanilla SGD to the wider TTSA context in distributed learning, thus broadening the scope of Hu et al. (2020). In addition, we leverage our CLT result to deduce the statistical properties of GTD algorithms with nonlinear function approximation using Markovian samples, and show that they share identical asymptotic performance, a perspective not evident from current finite-time bounds.
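For orientation, a generic TTSA recursion and the shape of a CLT for it can be sketched as follows; this is a standard textbook form under illustrative symbols (the maps h, g, the noise sequence, and the exact scaling are assumptions for exposition, not the paper's precise statement):

% Generic two-timescale recursion: x_n is the slow iterate, y_n the fast
% iterate, driven by an underlying (controlled) Markov chain {\xi_n}.
% Step sizes satisfy \beta_n / \gamma_n -> 0, so y_n tracks its
% equilibrium faster than x_n moves.
\begin{align*}
  x_{n+1} &= x_n + \beta_n \, h(x_n, y_n, \xi_{n+1}), \\
  y_{n+1} &= y_n + \gamma_n \, g(x_n, y_n, \xi_{n+1}).
\end{align*}
% A CLT for such a scheme asserts asymptotic normality of the scaled
% error around the limit point x^*, e.g.
\[
  \beta_n^{-1/2} \, (x_n - x^*) \;\xrightarrow{\;d\;}\; \mathcal{N}(0, V_x),
\]
% where the asymptotic covariance V_x depends on the mean-field dynamics
% and the noise. Under Markovian samples, V_x involves the chain's
% long-run (time-correlated) covariance rather than only the per-step
% martingale-difference variance, which is what distinguishes this
% setting from prior CLT results.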

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-hu24b,
  title     = {Central Limit Theorem for Two-Timescale Stochastic Approximation with {M}arkovian Noise: Theory and Applications},
  author    = {Hu, Jie and Doshi, Vishwaraj and Eun, Do Young},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1477--1485},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/hu24b/hu24b.pdf},
  url       = {https://proceedings.mlr.press/v238/hu24b.html},
  abstract  = {Two-timescale stochastic approximation (TTSA) is among the most general frameworks for iterative stochastic algorithms. It encompasses well-known stochastic optimization methods, such as SGD variants and methods designed for bilevel or minimax problems, as well as reinforcement learning algorithms such as the family of gradient-based temporal difference (GTD) algorithms. In this paper, we conduct an in-depth asymptotic analysis of TTSA under controlled Markovian noise via a central limit theorem (CLT), uncovering the coupled dynamics of TTSA influenced by the underlying Markov chain, which have not been addressed by previous CLT results for TTSA that assume only martingale difference noise. Building upon our CLT, we extend efficient sampling strategies from vanilla SGD to the wider TTSA context in distributed learning, thus broadening the scope of Hu et al. (2020). In addition, we leverage our CLT result to deduce the statistical properties of GTD algorithms with nonlinear function approximation using Markovian samples, and show that they share identical asymptotic performance, a perspective not evident from current finite-time bounds.}
}
Endnote
%0 Conference Paper
%T Central Limit Theorem for Two-Timescale Stochastic Approximation with Markovian Noise: Theory and Applications
%A Jie Hu
%A Vishwaraj Doshi
%A Do Young Eun
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-hu24b
%I PMLR
%P 1477--1485
%U https://proceedings.mlr.press/v238/hu24b.html
%V 238
%X Two-timescale stochastic approximation (TTSA) is among the most general frameworks for iterative stochastic algorithms. It encompasses well-known stochastic optimization methods, such as SGD variants and methods designed for bilevel or minimax problems, as well as reinforcement learning algorithms such as the family of gradient-based temporal difference (GTD) algorithms. In this paper, we conduct an in-depth asymptotic analysis of TTSA under controlled Markovian noise via a central limit theorem (CLT), uncovering the coupled dynamics of TTSA influenced by the underlying Markov chain, which have not been addressed by previous CLT results for TTSA that assume only martingale difference noise. Building upon our CLT, we extend efficient sampling strategies from vanilla SGD to the wider TTSA context in distributed learning, thus broadening the scope of Hu et al. (2020). In addition, we leverage our CLT result to deduce the statistical properties of GTD algorithms with nonlinear function approximation using Markovian samples, and show that they share identical asymptotic performance, a perspective not evident from current finite-time bounds.
APA
Hu, J., Doshi, V. & Eun, D. Y. (2024). Central Limit Theorem for Two-Timescale Stochastic Approximation with Markovian Noise: Theory and Applications. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1477-1485. Available from https://proceedings.mlr.press/v238/hu24b.html.
