SGD without Replacement: Sharper Rates for General Smooth Convex Functions

Dheeraj Nagaraj, Prateek Jain, Praneeth Netrapalli
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4703-4711, 2019.

Abstract

We study stochastic gradient descent without replacement (SGDo) for smooth convex functions. SGDo is widely observed to converge faster than true SGD, where each sample is drawn independently with replacement (Bottou, 2009), and hence is more popular in practice. But its convergence properties are not well understood, as sampling without replacement leads to coupling between iterates and gradients. By using the method of exchangeable pairs to bound the Wasserstein distance, we provide the first non-asymptotic results for SGDo when applied to general smooth, strongly-convex functions. In particular, we show that SGDo converges at a rate of $O(1/K^2)$, while SGD is known to converge at an $O(1/K)$ rate, where $K$ denotes the number of passes over the data and is required to be large enough. Existing results for SGDo in this setting require an additional Hessian Lipschitz assumption (Gurbuzbalaban et al., 2015; HaoChen and Sra, 2018). For small $K$, we show SGDo can achieve the same convergence rate as SGD for general smooth strongly-convex functions. Existing results in this setting require $K=1$ and hold only for generalized linear models (Shamir, 2016). In addition, by careful analysis of the coupling, for both large and small $K$, we obtain better dependence on problem-dependent parameters such as the condition number.
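
To make the two sampling schemes concrete, below is a minimal Python sketch contrasting with-replacement SGD and SGDo (random reshuffling: a fresh permutation each pass, every sample used exactly once per pass). The least-squares objective, constant step size, and all variable names here are illustrative assumptions for exposition only; the paper's analysis uses its own step-size choices and covers general smooth, strongly-convex functions.

```python
# Sketch only: contrasts i.i.d. with-replacement sampling against
# without-replacement (random reshuffling) sampling on a toy
# least-squares problem. Step size and problem setup are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def sgd_with_replacement(x, epochs, lr):
    for _ in range(epochs):
        for _ in range(n):
            i = rng.integers(n)          # sample i.i.d. with replacement
            x = x - lr * grad_i(x, i)
    return x

def sgd_without_replacement(x, epochs, lr):
    for _ in range(epochs):
        perm = rng.permutation(n)        # fresh random order each pass (K passes total)
        for i in perm:                   # each sample used exactly once per pass
            x = x - lr * grad_i(x, i)
    return x

x0 = np.zeros(d)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
for method in (sgd_with_replacement, sgd_without_replacement):
    x_hat = method(x0.copy(), epochs=50, lr=1e-3)
    print(method.__name__, np.linalg.norm(x_hat - x_star))
```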

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-nagaraj19a,
  title     = {{SGD} without Replacement: Sharper Rates for General Smooth Convex Functions},
  author    = {Nagaraj, Dheeraj and Jain, Prateek and Netrapalli, Praneeth},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4703--4711},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/nagaraj19a/nagaraj19a.pdf},
  url       = {https://proceedings.mlr.press/v97/nagaraj19a.html}
}
Endnote
%0 Conference Paper
%T SGD without Replacement: Sharper Rates for General Smooth Convex Functions
%A Dheeraj Nagaraj
%A Prateek Jain
%A Praneeth Netrapalli
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-nagaraj19a
%I PMLR
%P 4703--4711
%U https://proceedings.mlr.press/v97/nagaraj19a.html
%V 97
APA
Nagaraj, D., Jain, P. & Netrapalli, P. (2019). SGD without Replacement: Sharper Rates for General Smooth Convex Functions. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4703-4711. Available from https://proceedings.mlr.press/v97/nagaraj19a.html.