Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise

Zijian Liu, Jiawei Zhang, Zhengyuan Zhou
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:2266-2290, 2023.

Abstract

In this paper, we consider the stochastic optimization problem with smooth but not necessarily convex objectives in the heavy-tailed noise regime, where the stochastic gradient’s noise is assumed to have bounded $p$th moment ($p\in(1,2]$). This setting is motivated by a recent plethora of studies in the machine learning literature pointing out that, compared with the standard finite-variance assumption, heavy-tailed noise better models modern machine learning tasks such as training neural networks. In this regime, Zhang et al. (2020) were the first to prove the $\Omega(T^{\frac{1-p}{3p-2}})$ lower bound for convergence (in expectation) and provided a simple clipping algorithm that matches this optimal rate. Later, Cutkosky and Mehta (2021) proposed another algorithm, which achieves the nearly optimal high-probability convergence guarantee $O(\log(T/\delta)T^{\frac{1-p}{3p-2}})$, where $\delta$ is the probability of failure. However, this desirable guarantee is established only under the additional assumption that the stochastic gradient itself has bounded $p$th moment, which fails to hold even for quadratic objectives and centered Gaussian noise. In this work, we first improve the analysis of the algorithm of Cutkosky and Mehta (2021) to obtain the same nearly optimal high-probability convergence rate $O(\log(T/\delta)T^{\frac{1-p}{3p-2}})$ without the above-mentioned restrictive assumption. Next, and curiously, we show that one can achieve a rate faster than that dictated by the lower bound $\Omega(T^{\frac{1-p}{3p-2}})$ with only a little structure, i.e., when the objective function $F(x)$ is assumed to be of the form $\mathbb{E}_{\Xi\sim\mathcal{D}}[f(x,\Xi)]$, arguably the most widely applicable class of stochastic optimization problems. For this class of problems, we propose the first variance-reduced accelerated algorithm and establish that it guarantees a high-probability convergence rate of $O(\log(T/\delta)T^{\frac{1-p}{2p-1}})$ under a mild condition, which is faster than $\Omega(T^{\frac{1-p}{3p-2}})$. Notably, even when specialized to the standard finite-variance case ($p=2$), our result yields the (near-)optimal high-probability rate $O(\log(T/\delta)T^{-1/3})$, which was previously unknown.
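To see why the structured rate $O(\log(T/\delta)T^{\frac{1-p}{2p-1}})$ improves on the unstructured lower bound $\Omega(T^{\frac{1-p}{3p-2}})$, a direct comparison of the exponents (a quick check, not part of the original abstract) suffices:

\[
\frac{1-p}{2p-1} \;<\; \frac{1-p}{3p-2} \;<\; 0 \qquad \text{for all } p\in(1,2],
\]

since $1-p<0$ and $0<2p-1<3p-2$ whenever $p>1$. For instance, at $p=2$ the two exponents are $-\tfrac{1}{3}$ and $-\tfrac{1}{4}$, so the accelerated rate $T^{-1/3}$ decays strictly faster than the clipping rate $T^{-1/4}$.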

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-liu23c,
  title     = {Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise},
  author    = {Liu, Zijian and Zhang, Jiawei and Zhou, Zhengyuan},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {2266--2290},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/liu23c/liu23c.pdf},
  url       = {https://proceedings.mlr.press/v195/liu23c.html}
}
Endnote
%0 Conference Paper
%T Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise
%A Zijian Liu
%A Jiawei Zhang
%A Zhengyuan Zhou
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-liu23c
%I PMLR
%P 2266--2290
%U https://proceedings.mlr.press/v195/liu23c.html
%V 195
APA
Liu, Z., Zhang, J., & Zhou, Z. (2023). Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:2266-2290. Available from https://proceedings.mlr.press/v195/liu23c.html.
