Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent

Kangqiao Liu, Liu Ziyin, Masahito Ueda
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7045-7056, 2021.

Abstract

In the vanishing learning rate regime, stochastic gradient descent (SGD) is now relatively well understood. In this work, we propose to study the basic properties of SGD and its variants in the non-vanishing learning rate regime. The focus is on deriving exactly solvable results and discussing their implications. The main contributions of this work are to derive the stationary distribution for discrete-time SGD in a quadratic loss function with and without momentum; in particular, one implication of our result is that the fluctuation caused by discrete-time dynamics takes a distorted shape and is dramatically larger than a continuous-time theory could predict. Examples of applications of the proposed theory considered in this work include the approximation error of variants of SGD, the effect of minibatch noise, the optimal Bayesian inference, the escape rate from a sharp minimum, and the stationary covariance of a few second-order methods including damped Newton’s method, natural gradient descent, and Adam.
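
To make the abstract's central claim concrete, here is a minimal numerical sketch (not taken from the paper): it simulates discrete-time SGD on a 1D quadratic loss L(theta) = a*theta^2/2 with additive Gaussian gradient noise, and compares the empirical stationary variance with the continuous-time (vanishing learning rate) prediction eta*sigma^2/(2a) and with the discrete-time value eta*sigma^2/(a*(2 - eta*a)) obtained from the linear update theta <- (1 - eta*a)*theta - eta*noise. The additive noise model and all parameter values are illustrative assumptions, not the paper's setting.

# Minimal sketch, assuming a 1D quadratic loss and additive Gaussian gradient noise.
import numpy as np

rng = np.random.default_rng(0)

a, sigma = 1.0, 1.0          # curvature and gradient-noise scale (assumed values)
eta = 0.5                    # finite (non-vanishing) learning rate; needs eta*a < 2 for stability
steps, burn_in = 200_000, 10_000

theta = 0.0
samples = []
for t in range(steps):
    grad = a * theta + sigma * rng.standard_normal()  # noisy gradient of the quadratic loss
    theta -= eta * grad                               # discrete-time SGD update
    if t >= burn_in:
        samples.append(theta)

print("empirical stationary variance :", np.var(samples))
print("continuous-time prediction    :", eta * sigma**2 / (2 * a))
print("discrete-time prediction      :", eta * sigma**2 / (a * (2 - eta * a)))

With these assumed values the discrete-time variance exceeds the continuous-time prediction by a factor 2/(2 - eta*a) = 4/3, illustrating (in a toy setting) why finite learning rate fluctuations can be larger than a continuous-time theory predicts.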

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-liu21ad,
  title     = {Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent},
  author    = {Liu, Kangqiao and Ziyin, Liu and Ueda, Masahito},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7045--7056},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21ad/liu21ad.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21ad.html},
  abstract  = {In the vanishing learning rate regime, stochastic gradient descent (SGD) is now relatively well understood. In this work, we propose to study the basic properties of SGD and its variants in the non-vanishing learning rate regime. The focus is on deriving exactly solvable results and discussing their implications. The main contributions of this work are to derive the stationary distribution for discrete-time SGD in a quadratic loss function with and without momentum; in particular, one implication of our result is that the fluctuation caused by discrete-time dynamics takes a distorted shape and is dramatically larger than a continuous-time theory could predict. Examples of applications of the proposed theory considered in this work include the approximation error of variants of SGD, the effect of minibatch noise, the optimal Bayesian inference, the escape rate from a sharp minimum, and the stationary covariance of a few second-order methods including damped Newton’s method, natural gradient descent, and Adam.}
}
Endnote
%0 Conference Paper
%T Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
%A Kangqiao Liu
%A Liu Ziyin
%A Masahito Ueda
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-liu21ad
%I PMLR
%P 7045--7056
%U https://proceedings.mlr.press/v139/liu21ad.html
%V 139
%X In the vanishing learning rate regime, stochastic gradient descent (SGD) is now relatively well understood. In this work, we propose to study the basic properties of SGD and its variants in the non-vanishing learning rate regime. The focus is on deriving exactly solvable results and discussing their implications. The main contributions of this work are to derive the stationary distribution for discrete-time SGD in a quadratic loss function with and without momentum; in particular, one implication of our result is that the fluctuation caused by discrete-time dynamics takes a distorted shape and is dramatically larger than a continuous-time theory could predict. Examples of applications of the proposed theory considered in this work include the approximation error of variants of SGD, the effect of minibatch noise, the optimal Bayesian inference, the escape rate from a sharp minimum, and the stationary covariance of a few second-order methods including damped Newton’s method, natural gradient descent, and Adam.
APA
Liu, K., Ziyin, L., & Ueda, M. (2021). Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7045-7056. Available from https://proceedings.mlr.press/v139/liu21ad.html.
