SMG: A Shuffling Gradient-Based Method with Momentum

Trang H Tran, Lam M Nguyen, Quoc Tran-Dinh
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10379-10389, 2021.

Abstract

We combine two advanced ideas widely used in optimization for machine learning: the \textit{shuffling} strategy and the \textit{momentum} technique to develop a novel shuffling gradient-based method with momentum, coined \textbf{S}huffling \textbf{M}omentum \textbf{G}radient (SMG), for non-convex finite-sum optimization problems. While our method is inspired by momentum techniques, its update is fundamentally different from existing momentum-based methods. We establish state-of-the-art convergence rates of SMG for any shuffling strategy using either a constant or a diminishing learning rate under standard assumptions (i.e., \textit{$L$-smoothness} and \textit{bounded variance}). When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and prove the same convergence rates for this algorithm under the $L$-smoothness and bounded gradient assumptions. We demonstrate our algorithms via numerical simulations on standard datasets and compare them with existing shuffling methods. Our tests show encouraging performance of the new algorithms.
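To illustrate the two ingredients the abstract combines, here is a minimal sketch of a shuffling gradient method with a heavy-ball-style momentum buffer on a toy finite-sum problem. Note that this is a generic sketch for illustration only: the paper states that SMG's actual update is fundamentally different from standard momentum methods, and the toy objective, step size, and momentum weight below are all assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (x - a[i])**2,
# whose minimizer is the mean of a. The i-th component gradient is x - a[i].
n = 20
a = rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component function at x."""
    return x - a[i]

def shuffling_sgd_momentum(epochs=100, lr=0.05, beta=0.5):
    """Generic shuffling gradient method with a momentum buffer.

    Each epoch draws a fresh random permutation of the component indices
    (the 'random reshuffling' strategy) and takes one momentum-averaged
    gradient step per component. This is NOT the exact SMG update, just
    an illustration of combining shuffling with momentum.
    """
    x, m = 0.0, 0.0
    for _ in range(epochs):
        perm = rng.permutation(n)  # reshuffle the data order every epoch
        for i in perm:
            m = beta * m + (1 - beta) * grad_i(x, i)  # momentum buffer
            x = x - lr * m                            # gradient step
    return x

x_star = shuffling_sgd_momentum()
print(x_star, a.mean())  # x_star should land close to the minimizer a.mean()
```

With a constant learning rate the iterates settle into a small neighborhood of the minimizer rather than converging exactly; a diminishing learning rate schedule, as analyzed in the paper, removes this residual error.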

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-tran21b,
  title     = {SMG: A Shuffling Gradient-Based Method with Momentum},
  author    = {Tran, Trang H and Nguyen, Lam M and Tran-Dinh, Quoc},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10379--10389},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/tran21b/tran21b.pdf},
  url       = {https://proceedings.mlr.press/v139/tran21b.html}
}
Endnote
%0 Conference Paper
%T SMG: A Shuffling Gradient-Based Method with Momentum
%A Trang H Tran
%A Lam M Nguyen
%A Quoc Tran-Dinh
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-tran21b
%I PMLR
%P 10379--10389
%U https://proceedings.mlr.press/v139/tran21b.html
%V 139
APA
Tran, T.H., Nguyen, L.M. & Tran-Dinh, Q. (2021). SMG: A Shuffling Gradient-Based Method with Momentum. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10379-10389. Available from https://proceedings.mlr.press/v139/tran21b.html.