On the Noisy Gradient Descent that Generalizes as SGD

Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, Zhanxing Zhu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10367-10376, 2020.

Abstract

The gradient noise of SGD is believed to play a central role in the strong generalization observed in deep learning. While past studies confirm that the magnitude and the covariance structure of the gradient noise are critical for regularization, it remains unclear whether the class of noise distributions matters. In this work we provide negative results by showing that noise from classes other than the SGD noise can regularize gradient descent just as effectively. Our finding rests on a novel observation about the structure of the SGD noise: it is the product of the gradient matrix and a sampling noise that arises from the mini-batch sampling procedure. Moreover, the sampling noise unifies two kinds of gradient-regularizing noise in the Gaussian class: one whose covariance is the (scaled) Fisher and one whose covariance is the gradient covariance of SGD. Finally, thanks to this flexibility in the choice of noise class, we propose an algorithm that performs noisy gradient descent and generalizes well; a variant of it even benefits large-batch SGD training without hurting generalization.
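
To make the key observation concrete, here is a minimal numerical sketch (illustrative only, with hypothetical variable names; not the authors' released code): for a loss averaged over n per-sample losses, the mini-batch gradient noise equals the per-sample gradient matrix multiplied by a sampling-noise vector whose entries are 1/b on the sampled indices minus 1/n everywhere.

    # Illustrative sketch (hypothetical names; not from the paper's code release):
    # verify numerically that SGD noise = gradient matrix @ sampling noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, b = 8, 5, 3                       # number of samples, parameters, batch size

    G = rng.standard_normal((d, n))         # column i = gradient of the i-th per-sample loss
    full_grad = G.mean(axis=1)              # full-batch gradient

    batch = rng.choice(n, size=b, replace=False)
    batch_grad = G[:, batch].mean(axis=1)   # mini-batch gradient

    eps = -np.ones(n) / n                   # sampling-noise vector: -1/n everywhere ...
    eps[batch] += 1.0 / b                   # ... plus 1/b on the sampled indices

    noise = batch_grad - full_grad          # SGD gradient noise
    print(np.allclose(noise, G @ eps))      # prints True

Since only the sampling-noise factor depends on the mini-batching, it can in principle be replaced by noise from other distribution classes while keeping the multiplicative structure, which is the flexibility the proposed algorithm exploits.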

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wu20c,
  title     = {On the Noisy Gradient Descent that Generalizes as {SGD}},
  author    = {Wu, Jingfeng and Hu, Wenqing and Xiong, Haoyi and Huan, Jun and Braverman, Vladimir and Zhu, Zhanxing},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10367--10376},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wu20c/wu20c.pdf},
  url       = {https://proceedings.mlr.press/v119/wu20c.html},
  abstract  = {The gradient noise of SGD is considered to play a central role in the observed strong generalization abilities of deep learning. While past studies confirm that the magnitude and the covariance structure of gradient noise are critical for regularization, it remains unclear whether or not the class of noise distributions is important. In this work we provide negative results by showing that noises in classes different from the SGD noise can also effectively regularize gradient descent. Our finding is based on a novel observation on the structure of the SGD noise: it is the multiplication of the gradient matrix and a sampling noise that arises from the mini-batch sampling procedure. Moreover, the sampling noises unify two kinds of gradient regularizing noises that belong to the Gaussian class: the one using (scaled) Fisher as covariance and the one using the gradient covariance of SGD as covariance. Finally, thanks to the flexibility of choosing noise class, an algorithm is proposed to perform noisy gradient descent that generalizes well, the variant of which even benefits large batch SGD training without hurting generalization.}
}
Endnote
%0 Conference Paper
%T On the Noisy Gradient Descent that Generalizes as SGD
%A Jingfeng Wu
%A Wenqing Hu
%A Haoyi Xiong
%A Jun Huan
%A Vladimir Braverman
%A Zhanxing Zhu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wu20c
%I PMLR
%P 10367--10376
%U https://proceedings.mlr.press/v119/wu20c.html
%V 119
%X The gradient noise of SGD is considered to play a central role in the observed strong generalization abilities of deep learning. While past studies confirm that the magnitude and the covariance structure of gradient noise are critical for regularization, it remains unclear whether or not the class of noise distributions is important. In this work we provide negative results by showing that noises in classes different from the SGD noise can also effectively regularize gradient descent. Our finding is based on a novel observation on the structure of the SGD noise: it is the multiplication of the gradient matrix and a sampling noise that arises from the mini-batch sampling procedure. Moreover, the sampling noises unify two kinds of gradient regularizing noises that belong to the Gaussian class: the one using (scaled) Fisher as covariance and the one using the gradient covariance of SGD as covariance. Finally, thanks to the flexibility of choosing noise class, an algorithm is proposed to perform noisy gradient descent that generalizes well, the variant of which even benefits large batch SGD training without hurting generalization.
APA
Wu, J., Hu, W., Xiong, H., Huan, J., Braverman, V. & Zhu, Z. (2020). On the Noisy Gradient Descent that Generalizes as SGD. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10367-10376. Available from https://proceedings.mlr.press/v119/wu20c.html.
