Random Shuffling Beats SGD after Finite Epochs
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2624-2633, 2019.
Abstract
A long-standing problem in stochastic optimization is proving that RandomShuffle, the without-replacement version of SGD, converges faster than the usual with-replacement SGD. Building upon Gürbüzbalaban et al. (2015), we present the first (to our knowledge) non-asymptotic results for this problem by proving that after a reasonable number of epochs RandomShuffle converges faster than SGD. Specifically, we prove that for strongly convex, second-order smooth functions, the iterates of RandomShuffle converge to the optimal solution at the rate O(1/T^2 + n^3/T^3), where n is the number of components in the objective and T is the number of iterations. This result implies that after O(√n) epochs, RandomShuffle is strictly better than SGD (which converges as O(1/T)). The key step toward showing this better dependence on T is the introduction of n into the bound; and as our analysis shows, in general a dependence on n is unavoidable without further changes. To understand how RandomShuffle works in practice, we further explore two empirically useful settings: data sparsity and over-parameterization. For sparse data, RandomShuffle attains the rate O(1/T^2), again strictly better than SGD. Under a setting closely related to over-parameterization, RandomShuffle is shown to converge faster than SGD after any arbitrary number of iterations. Finally, we extend the analysis of RandomShuffle to smooth non-convex and convex functions.
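The following is a minimal sketch, not the authors' code, contrasting the two sampling schemes the abstract compares: with-replacement SGD, which draws a component uniformly at every step, and RandomShuffle, which visits all n components in a fresh random order each epoch. The toy least-squares objective, the step-size schedule, and the epoch count are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's setup): compare with-replacement SGD
# against RandomShuffle (without-replacement SGD with a fresh permutation per epoch)
# on a toy least-squares objective f(x) = (1/n) * sum_i (a_i^T x - b_i)^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10                       # n components, d-dimensional iterate
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                        # consistent system, so x_star minimizes every component

def grad(x, i):
    """Gradient of the i-th component f_i(x) = (a_i^T x - b_i)^2."""
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def step(t):
    """Decaying step size; constants chosen only to keep this toy run stable."""
    return 0.2 / (t + 20.0)

def sgd(epochs):
    """With-replacement SGD: every step samples one of the n components uniformly."""
    x = np.zeros(d)
    for t in range(epochs * n):
        i = rng.integers(n)
        x -= step(t) * grad(x, i)
    return x

def random_shuffle(epochs):
    """Without-replacement SGD: each epoch processes a fresh random permutation."""
    x = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            x -= step(t) * grad(x, i)
            t += 1
    return x

print("with-replacement SGD error:", np.linalg.norm(sgd(20) - x_star))
print("RandomShuffle error:       ", np.linalg.norm(random_shuffle(20) - x_star))
```

The only difference between the two routines is how component indices are sampled; the paper's analysis concerns how this change affects the convergence rate as a function of T and n.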