Momentum Improves Normalized SGD

Ashok Cutkosky, Harsh Mehta
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2260-2268, 2020.

Abstract

We provide an improved analysis of normalized SGD showing that adding momentum provably removes the need for large batch sizes on non-convex objectives. Then, we consider the case of objectives with bounded second derivative and show that in this case a small tweak to the momentum formula allows normalized SGD with momentum to find an $\epsilon$-critical point in $O(1/\epsilon^{3.5})$ iterations, matching the best-known rates without accruing any logarithmic factors or dependence on dimension. We provide an adaptive learning rate schedule that automatically improves convergence rates when the variance in the gradients is small. Finally, we show that our method is effective when employed on popular large scale tasks such as ResNet-50 and BERT pretraining, matching the performance of the disparate methods used to get state-of-the-art results on both tasks.
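To make the update the abstract refers to concrete, here is a minimal sketch of normalized SGD with momentum: an exponential moving average of stochastic gradients, followed by a step of fixed length in its direction. This is an illustrative sketch only, not the authors' exact algorithm (it omits the momentum tweak and the adaptive learning rate schedule from the paper), and the function and parameter names are ours.

    import numpy as np

    def normalized_sgd_momentum(grad_fn, x0, lr=0.01, beta=0.9, num_steps=1000, eps=1e-12):
        """Illustrative sketch of normalized SGD with momentum (not the paper's exact method).

        grad_fn(x) should return a (possibly stochastic) gradient estimate at x.
        The update keeps an exponential moving average of gradients and then
        takes a step of fixed length `lr` in its direction.
        """
        x = np.asarray(x0, dtype=float).copy()
        m = np.zeros_like(x)
        for _ in range(num_steps):
            g = grad_fn(x)                               # stochastic gradient estimate
            m = beta * m + (1.0 - beta) * g              # momentum: EMA of gradients
            x = x - lr * m / (np.linalg.norm(m) + eps)   # normalized step
        return x

    # Toy usage: minimize f(x) = ||x||^2 / 2 with noisy gradients.
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
    x_final = normalized_sgd_momentum(noisy_grad, x0=np.ones(10), lr=0.05, beta=0.9)
    print(np.linalg.norm(x_final))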

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-cutkosky20b,
  title     = {Momentum Improves Normalized {SGD}},
  author    = {Cutkosky, Ashok and Mehta, Harsh},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2260--2268},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/cutkosky20b/cutkosky20b.pdf},
  url       = {https://proceedings.mlr.press/v119/cutkosky20b.html},
  abstract  = {We provide an improved analysis of normalized SGD showing that adding momentum provably removes the need for large batch sizes on non-convex objectives. Then, we consider the case of objectives with bounded second derivative and show that in this case a small tweak to the momentum formula allows normalized SGD with momentum to find an $\epsilon$-critical point in $O(1/\epsilon^{3.5})$ iterations, matching the best-known rates without accruing any logarithmic factors or dependence on dimension. We provide an adaptive learning rate schedule that automatically improves convergence rates when the variance in the gradients is small. Finally, we show that our method is effective when employed on popular large scale tasks such as ResNet-50 and BERT pretraining, matching the performance of the disparate methods used to get state-of-the-art results on both tasks.}
}
Endnote
%0 Conference Paper
%T Momentum Improves Normalized SGD
%A Ashok Cutkosky
%A Harsh Mehta
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-cutkosky20b
%I PMLR
%P 2260--2268
%U https://proceedings.mlr.press/v119/cutkosky20b.html
%V 119
%X We provide an improved analysis of normalized SGD showing that adding momentum provably removes the need for large batch sizes on non-convex objectives. Then, we consider the case of objectives with bounded second derivative and show that in this case a small tweak to the momentum formula allows normalized SGD with momentum to find an $\epsilon$-critical point in $O(1/\epsilon^{3.5})$ iterations, matching the best-known rates without accruing any logarithmic factors or dependence on dimension. We provide an adaptive learning rate schedule that automatically improves convergence rates when the variance in the gradients is small. Finally, we show that our method is effective when employed on popular large scale tasks such as ResNet-50 and BERT pretraining, matching the performance of the disparate methods used to get state-of-the-art results on both tasks.
APA
Cutkosky, A. & Mehta, H. (2020). Momentum Improves Normalized SGD. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2260-2268. Available from https://proceedings.mlr.press/v119/cutkosky20b.html.