Random Shuffling Beats SGD after Finite Epochs

Jeff Haochen, Suvrit Sra
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2624-2633, 2019.

Abstract

A long-standing problem in stochastic optimization is proving that RandomShuffle, the without-replacement version of SGD, converges faster than the usual with-replacement SGD. Building upon Gürbüzbalaban et al. (2015), we present the first (to our knowledge) non-asymptotic results for this problem by proving that after a reasonable number of epochs RandomShuffle converges faster than SGD. Specifically, we prove that for strongly convex, second-order smooth functions, the iterates of RandomShuffle converge to the optimal solution as $\mathcal{O}(1/T^2 + n^3/T^3)$, where $n$ is the number of components in the objective and $T$ is the number of iterations. This result implies that after $\mathcal{O}(\sqrt{n})$ epochs, RandomShuffle is strictly better than SGD (which converges as $\mathcal{O}(1/T)$). The key step toward showing this better dependence on $T$ is the introduction of $n$ into the bound; and as our analysis shows, in general a dependence on $n$ is unavoidable without further changes. To understand how RandomShuffle works in practice, we further explore two empirically useful settings: data sparsity and over-parameterization. For sparse data, RandomShuffle has the rate $\mathcal{O}(1/T^2)$, again strictly better than SGD. Under a setting closely related to over-parameterization, RandomShuffle is shown to converge faster than SGD after any number of iterations. Finally, we extend the analysis of RandomShuffle to smooth non-convex and convex functions.
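To see where the $\mathcal{O}(\sqrt{n})$-epoch threshold comes from, write the total iteration count as $T = nE$ for $E$ epochs. The bound above then becomes $\mathcal{O}(1/(n^2 E^2) + 1/E^3)$, which falls below SGD's $\mathcal{O}(1/T) = \mathcal{O}(1/(nE))$ once $E \gtrsim \sqrt{n}$.

The sketch below is not the authors' code; it is a minimal illustration, with an arbitrary least-squares instance, step size, and problem sizes, of the two sampling schemes being compared: SGD draws a component index with replacement at every step, while RandomShuffle draws a fresh permutation of the $n$ components at the start of each epoch and visits them without replacement.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10                      # number of components, dimension
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                      # consistent system, so x_star minimizes the objective

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = (a_i^T x - b_i)^2.
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def run(sampler, epochs, lr=0.005):
    # Perform epochs * n stochastic gradient steps, drawing indices from `sampler`.
    x = np.zeros(d)
    for _ in range(epochs):
        for i in sampler():
            x -= lr * grad_i(x, i)
    return np.linalg.norm(x - x_star) ** 2

# SGD: n indices drawn with replacement per epoch.
sgd_sampler = lambda: rng.integers(0, n, size=n)
# RandomShuffle: a fresh random permutation per epoch, visited without replacement.
rs_sampler = lambda: rng.permutation(n)

epochs = 30  # comfortably above sqrt(n) ~ 14, the regime discussed in the abstract
print("SGD            ||x_T - x*||^2:", run(sgd_sampler, epochs))
print("RandomShuffle  ||x_T - x*||^2:", run(rs_sampler, epochs))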

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-haochen19a,
  title     = {Random Shuffling Beats {SGD} after Finite Epochs},
  author    = {Haochen, Jeff and Sra, Suvrit},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2624--2633},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/haochen19a/haochen19a.pdf},
  url       = {https://proceedings.mlr.press/v97/haochen19a.html}
}
Endnote
%0 Conference Paper
%T Random Shuffling Beats SGD after Finite Epochs
%A Jeff Haochen
%A Suvrit Sra
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-haochen19a
%I PMLR
%P 2624--2633
%U https://proceedings.mlr.press/v97/haochen19a.html
%V 97
APA
Haochen, J. & Sra, S. (2019). Random Shuffling Beats SGD after Finite Epochs. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2624-2633. Available from https://proceedings.mlr.press/v97/haochen19a.html.
