Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling

Zhenyu Sun, Ermin Wei
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:57754-57772, 2025.

Abstract

Unlike its vanilla counterpart with i.i.d. samples, stochastic optimization with Markovian sampling allows the samples to be drawn following a Markov chain. This problem encompasses applications ranging from asynchronous distributed optimization to reinforcement learning. In this work, we lower bound the sample complexity of finding $\epsilon$-approximate critical solutions for any first-order method when sampling is Markovian. We show that for samples drawn from stationary Markov processes with countable state space, any algorithm that accesses smooth, non-convex functions through queries to a stochastic gradient oracle requires at least $\Omega(\epsilon^{-4})$ samples. Moreover, for finite Markov chains, we show an $\Omega(\epsilon^{-2})$ lower bound and propose a new algorithm, called MaC-SAGE, that is proven to (nearly) match our lower bound.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-sun25u,
  title     = {Improved Lower Bounds for First-order Stochastic Non-convex Optimization under {M}arkov Sampling},
  author    = {Sun, Zhenyu and Wei, Ermin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {57754--57772},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/sun25u/sun25u.pdf},
  url       = {https://proceedings.mlr.press/v267/sun25u.html},
  abstract  = {Unlike its vanilla counterpart with i.i.d. samples, stochastic optimization with Markovian sampling allows the samples to be drawn following a Markov chain. This problem encompasses applications ranging from asynchronous distributed optimization to reinforcement learning. In this work, we lower bound the sample complexity of finding $\epsilon$-approximate critical solutions for any first-order method when sampling is Markovian. We show that for samples drawn from stationary Markov processes with countable state space, any algorithm that accesses smooth, non-convex functions through queries to a stochastic gradient oracle requires at least $\Omega(\epsilon^{-4})$ samples. Moreover, for finite Markov chains, we show an $\Omega(\epsilon^{-2})$ lower bound and propose a new algorithm, called MaC-SAGE, that is proven to (nearly) match our lower bound.}
}
Endnote
%0 Conference Paper
%T Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling
%A Zhenyu Sun
%A Ermin Wei
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-sun25u
%I PMLR
%P 57754--57772
%U https://proceedings.mlr.press/v267/sun25u.html
%V 267
%X Unlike its vanilla counterpart with i.i.d. samples, stochastic optimization with Markovian sampling allows the samples to be drawn following a Markov chain. This problem encompasses applications ranging from asynchronous distributed optimization to reinforcement learning. In this work, we lower bound the sample complexity of finding $\epsilon$-approximate critical solutions for any first-order method when sampling is Markovian. We show that for samples drawn from stationary Markov processes with countable state space, any algorithm that accesses smooth, non-convex functions through queries to a stochastic gradient oracle requires at least $\Omega(\epsilon^{-4})$ samples. Moreover, for finite Markov chains, we show an $\Omega(\epsilon^{-2})$ lower bound and propose a new algorithm, called MaC-SAGE, that is proven to (nearly) match our lower bound.
APA
Sun, Z. & Wei, E. (2025). Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:57754-57772. Available from https://proceedings.mlr.press/v267/sun25u.html.

Related Material