Understanding the unstable convergence of gradient descent

Kwangjun Ahn, Jingzhao Zhang, Suvrit Sra
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:247-257, 2022.

Abstract

Most existing analyses of (stochastic) gradient descent rely on the condition that for $L$-smooth costs, the step size is less than $2/L$. However, many works have observed that in machine learning applications step sizes often do not fulfill this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles, and discuss key causes behind it. We also identify its main characteristics, and how they interrelate based on both theory and experiments, offering a principled view toward understanding the phenomenon.
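As a rough illustration (not from the paper), the following Python sketch contrasts gradient descent below and above the classical $2/L$ stability threshold on a 1-D quadratic $f(x) = (L/2)x^2$, whose smoothness constant is $L$; the step sizes and horizon are arbitrary choices for the demonstration.

import numpy as np

def gradient_descent(eta, L=1.0, x0=1.0, steps=200):
    """Run GD on f(x) = (L/2) x^2 and return the iterates."""
    xs = [x0]
    for _ in range(steps):
        # x_{k+1} = x_k - eta * f'(x_k), with f'(x) = L * x
        xs.append(xs[-1] - eta * L * xs[-1])
    return np.array(xs)

L = 1.0
for eta in (1.5 / L, 1.99 / L, 2.01 / L):
    final = abs(gradient_descent(eta, L)[-1])
    print(f"step size {eta:.2f} (threshold 2/L = {2.0 / L:.2f}): |x_200| = {final:.3e}")

# For eta < 2/L the iterates contract toward the minimizer; for eta > 2/L they
# oscillate with growing magnitude on this quadratic. The paper studies why, on
# real machine learning losses, step sizes beyond 2/L can still yield
# convergence, albeit unstably.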

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-ahn22a,
  title     = {Understanding the unstable convergence of gradient descent},
  author    = {Ahn, Kwangjun and Zhang, Jingzhao and Sra, Suvrit},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {247--257},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ahn22a/ahn22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ahn22a.html},
  abstract  = {Most existing analyses of (stochastic) gradient descent rely on the condition that for $L$-smooth costs, the step size is less than $2/L$. However, many works have observed that in machine learning applications step sizes often do not fulfill this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles, and discuss key causes behind it. We also identify its main characteristics, and how they interrelate based on both theory and experiments, offering a principled view toward understanding the phenomenon.}
}
Endnote
%0 Conference Paper
%T Understanding the unstable convergence of gradient descent
%A Kwangjun Ahn
%A Jingzhao Zhang
%A Suvrit Sra
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ahn22a
%I PMLR
%P 247--257
%U https://proceedings.mlr.press/v162/ahn22a.html
%V 162
%X Most existing analyses of (stochastic) gradient descent rely on the condition that for $L$-smooth costs, the step size is less than $2/L$. However, many works have observed that in machine learning applications step sizes often do not fulfill this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles, and discuss key causes behind it. We also identify its main characteristics, and how they interrelate based on both theory and experiments, offering a principled view toward understanding the phenomenon.
APA
Ahn, K., Zhang, J. & Sra, S. (2022). Understanding the unstable convergence of gradient descent. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:247-257. Available from https://proceedings.mlr.press/v162/ahn22a.html.