Special Properties of Gradient Descent with Large Learning Rates

Amirkeivan Mohtashami, Martin Jaggi, Sebastian U Stich
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:25082-25104, 2023.

Abstract

When training neural networks, it has been widely observed that a large step size is essential in stochastic gradient descent (SGD) for obtaining superior models. However, the effect of large step sizes on the success of SGD is not well understood theoretically. Several previous works have attributed this success to the stochastic noise present in SGD. In contrast, we show through a novel set of experiments that the stochastic noise is not sufficient to explain good non-convex training, and that instead the effect of a large learning rate itself is essential for obtaining the best performance. We demonstrate the same effects also in the noise-less case, i.e., for full-batch GD. We formally prove that GD with a large step size, on certain non-convex function classes, follows a different trajectory than GD with a small step size, which can lead to convergence to a global minimum instead of a local one. Our settings provide a framework for future analysis that allows comparing algorithms based on behaviors that cannot be observed in traditional settings.
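To make the abstract's claim concrete, below is a minimal, self-contained sketch (not taken from the paper) of the kind of behavior it describes in the noise-free case: on a toy non-convex one-dimensional loss, full-batch GD with a small step size settles in the nearest local minimum, while the same GD with a large step size overshoots the barrier and converges near the global minimum. The loss function, starting point, and step sizes are illustrative assumptions, not the function classes analyzed in the paper.

```python
# Toy illustration (not the paper's construction): full-batch GD on a
# tilted double-well loss. A small step size stays in the shallow local
# basin; a large step size jumps the barrier into the global basin.

def f(x):
    """Non-convex loss: double well (x^2 - 1)^2 tilted by 0.3 * x."""
    return (x**2 - 1) ** 2 + 0.3 * x

def grad_f(x):
    """Exact derivative of f."""
    return 4 * x * (x**2 - 1) + 0.3

def gd(x0, lr, steps=500):
    """Plain full-batch gradient descent (no stochastic noise)."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

x0 = 1.5  # start in the basin of the shallow (local) minimum near x = +0.96

x_small = gd(x0, lr=0.01)  # small step size: stays in the local basin
x_large = gd(x0, lr=0.2)   # large step size: first step jumps past the barrier near x = 0.08

print(f"small lr: x = {x_small:+.3f}, f(x) = {f(x_small):+.3f}")  # ~ +0.96, f ~ +0.29 (local minimum)
print(f"large lr: x = {x_large:+.3f}, f(x) = {f(x_large):+.3f}")  # ~ -1.04, f ~ -0.31 (global minimum)
```

Both runs are deterministic, so the gap between the two endpoints comes purely from the step size, which is the point the abstract makes about large learning rates rather than stochastic noise.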

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-mohtashami23a,
  title     = {Special Properties of Gradient Descent with Large Learning Rates},
  author    = {Mohtashami, Amirkeivan and Jaggi, Martin and Stich, Sebastian U},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {25082--25104},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/mohtashami23a/mohtashami23a.pdf},
  url       = {https://proceedings.mlr.press/v202/mohtashami23a.html},
  abstract  = {When training neural networks, it has been widely observed that a large step size is essential in stochastic gradient descent (SGD) for obtaining superior models. However, the effect of large step sizes on the success of SGD is not well understood theoretically. Several previous works have attributed this success to the stochastic noise present in SGD. In contrast, we show through a novel set of experiments that the stochastic noise is not sufficient to explain good non-convex training, and that instead the effect of a large learning rate itself is essential for obtaining the best performance. We demonstrate the same effects also in the noise-less case, i.e., for full-batch GD. We formally prove that GD with a large step size, on certain non-convex function classes, follows a different trajectory than GD with a small step size, which can lead to convergence to a global minimum instead of a local one. Our settings provide a framework for future analysis that allows comparing algorithms based on behaviors that cannot be observed in traditional settings.}
}
Endnote
%0 Conference Paper
%T Special Properties of Gradient Descent with Large Learning Rates
%A Amirkeivan Mohtashami
%A Martin Jaggi
%A Sebastian U Stich
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-mohtashami23a
%I PMLR
%P 25082--25104
%U https://proceedings.mlr.press/v202/mohtashami23a.html
%V 202
%X When training neural networks, it has been widely observed that a large step size is essential in stochastic gradient descent (SGD) for obtaining superior models. However, the effect of large step sizes on the success of SGD is not well understood theoretically. Several previous works have attributed this success to the stochastic noise present in SGD. In contrast, we show through a novel set of experiments that the stochastic noise is not sufficient to explain good non-convex training, and that instead the effect of a large learning rate itself is essential for obtaining the best performance. We demonstrate the same effects also in the noise-less case, i.e., for full-batch GD. We formally prove that GD with a large step size, on certain non-convex function classes, follows a different trajectory than GD with a small step size, which can lead to convergence to a global minimum instead of a local one. Our settings provide a framework for future analysis that allows comparing algorithms based on behaviors that cannot be observed in traditional settings.
APA
Mohtashami, A., Jaggi, M. & Stich, S.U. (2023). Special Properties of Gradient Descent with Large Learning Rates. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:25082-25104. Available from https://proceedings.mlr.press/v202/mohtashami23a.html.