An Empirical Study of Stochastic Gradient Descent with Structured Covariance Noise

Yeming Wen, Kevin Luk, Maxime Gazeau, Guodong Zhang, Harris Chan, Jimmy Ba
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3621-3631, 2020.

Abstract

The choice of batch size in a stochastic optimization algorithm plays a substantial role in both optimization and generalization. Increasing the batch size typically improves optimization but degrades generalization. To address the problem of improving generalization while maintaining optimal convergence in large-batch training, we propose to add covariance noise to the gradients. We demonstrate that the learning performance of our method is more accurately captured by the structure of the covariance matrix of the noise than by the variance of the gradients. Moreover, for the convex quadratic case, we prove that this performance can be characterized by the Frobenius norm of the noise matrix. Our empirical studies with standard deep learning architectures and datasets show that our method not only improves generalization performance in large-batch training, but does so in a way where optimization performance remains desirable and the training duration is not lengthened.
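As a rough illustration only, and not the authors' exact procedure, the sketch below shows one large-batch SGD step with Gaussian noise drawn from a structured, non-isotropic covariance added to the gradient. It assumes a diagonal covariance estimated from per-example gradient second moments; the function name noisy_sgd_step, the noise_scale parameter, and the toy quadratic problem are all illustrative.

# Minimal sketch (assumed, not the paper's exact method): large-batch SGD step
# with added Gaussian noise whose covariance is structured (here: diagonal,
# built from per-example gradient second moments).
import numpy as np

def noisy_sgd_step(theta, per_example_grads, lr=0.1, noise_scale=1e-2, rng=None):
    """theta: (d,) parameters; per_example_grads: (n, d) gradients for one batch."""
    rng = np.random.default_rng() if rng is None else rng
    grad = per_example_grads.mean(axis=0)                  # large-batch gradient
    second_moment = (per_example_grads ** 2).mean(axis=0)  # diagonal covariance proxy
    # Sample structured noise eps ~ N(0, noise_scale * diag(second_moment)).
    eps = rng.normal(size=theta.shape) * np.sqrt(noise_scale * second_moment)
    return theta - lr * (grad + eps)

# Toy usage on a convex quadratic loss 0.5 * ||A theta - b||^2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(256, 10)), rng.normal(size=256)
theta = np.zeros(10)
for _ in range(100):
    residual = A @ theta - b                   # (n,)
    per_example_grads = residual[:, None] * A  # (n, d) per-example gradients
    theta = noisy_sgd_step(theta, per_example_grads, rng=rng)

The convex quadratic toy problem mirrors the setting in which the abstract states the Frobenius-norm characterization is proved; for deep networks the noise covariance structure would instead be built from the model's gradient statistics.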

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-wen20a,
  title     = {An Empirical Study of Stochastic Gradient Descent with Structured Covariance Noise},
  author    = {Wen, Yeming and Luk, Kevin and Gazeau, Maxime and Zhang, Guodong and Chan, Harris and Ba, Jimmy},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3621--3631},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/wen20a/wen20a.pdf},
  url       = {https://proceedings.mlr.press/v108/wen20a.html},
  abstract  = {The choice of batch-size in a stochastic optimization algorithm plays a substantial role for both optimization and generalization. Increasing the batch-size used typically improves optimization but degrades generalization. To address the problem of improving generalization while maintaining optimal convergence in large-batch training, we propose to add covariance noise to the gradients. We demonstrate that the learning performance of our method is more accurately captured by the structure of the covariance matrix of the noise rather than by the variance of gradients. Moreover, over the convex-quadratic, we prove in theory that it can be characterized by the Frobenius norm of the noise matrix. Our empirical studies with standard deep learning model-architectures and datasets shows that our method not only improves generalization performance in large-batch training, but furthermore, does so in a way where the optimization performance remains desirable and the training duration is not elongated.}
}
Endnote
%0 Conference Paper
%T An Empirical Study of Stochastic Gradient Descent with Structured Covariance Noise
%A Yeming Wen
%A Kevin Luk
%A Maxime Gazeau
%A Guodong Zhang
%A Harris Chan
%A Jimmy Ba
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-wen20a
%I PMLR
%P 3621--3631
%U https://proceedings.mlr.press/v108/wen20a.html
%V 108
%X The choice of batch-size in a stochastic optimization algorithm plays a substantial role for both optimization and generalization. Increasing the batch-size used typically improves optimization but degrades generalization. To address the problem of improving generalization while maintaining optimal convergence in large-batch training, we propose to add covariance noise to the gradients. We demonstrate that the learning performance of our method is more accurately captured by the structure of the covariance matrix of the noise rather than by the variance of gradients. Moreover, over the convex-quadratic, we prove in theory that it can be characterized by the Frobenius norm of the noise matrix. Our empirical studies with standard deep learning model-architectures and datasets shows that our method not only improves generalization performance in large-batch training, but furthermore, does so in a way where the optimization performance remains desirable and the training duration is not elongated.
APA
Wen, Y., Luk, K., Gazeau, M., Zhang, G., Chan, H. & Ba, J. (2020). An Empirical Study of Stochastic Gradient Descent with Structured Covariance Noise. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3621-3631. Available from https://proceedings.mlr.press/v108/wen20a.html.