Toward Understanding the Importance of Noise in Training Neural Networks

Mo Zhou, Tianyi Liu, Yan Li, Dachao Lin, Enlu Zhou, Tuo Zhao
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7594-7602, 2019.

Abstract

Extensive empirical evidence has corroborated that noise plays a crucial role in the effective and efficient training of deep neural networks. The theory behind this, however, remains largely unknown. This paper studies this fundamental problem through training a simple two-layer convolutional neural network model. Although training such a network requires solving a non-convex optimization problem with a spurious local optimum and a global optimum, we prove that a perturbed gradient descent algorithm, in conjunction with noise annealing, is guaranteed to converge to a global optimum in polynomial time from an arbitrary initialization. This implies that the noise enables the algorithm to efficiently escape from the spurious local optimum. Numerical experiments are provided to support our theory.
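To make the algorithmic idea concrete, below is a minimal Python sketch of perturbed gradient descent with annealed injected noise on a generic non-convex objective. This illustrates the general technique only, not the paper's exact algorithm or its two-layer CNN model; the objective, noise schedule, and all hyperparameters are assumptions chosen for demonstration.

import numpy as np

def perturbed_gd(grad_f, w0, eta=0.01, sigma0=30.0, decay=0.998,
                 n_iters=3000, seed=0):
    """Perturbed gradient descent with annealed noise (illustrative sketch).

    Each update injects isotropic Gaussian noise whose standard deviation
    decays geometrically. Large early noise lets the iterate hop out of
    spurious local optima; as the noise anneals, the iterate settles down.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    sigma = sigma0
    for _ in range(n_iters):
        noise = sigma * rng.standard_normal(w.shape)
        w = w - eta * (grad_f(w) + noise)  # noisy gradient step
        sigma *= decay                     # anneal the noise level
    return w

# Illustrative 1-D double-well objective: f(w) = w^4 - 4 w^2 + w, with a
# spurious local minimum near w ~ 1.35 and a global minimum near w ~ -1.47,
# separated by a barrier near w ~ 0.13.
grad = lambda w: 4 * w**3 - 8 * w + 1

# Start near the spurious local minimum; with enough early noise the iterate
# typically crosses the barrier and settles near the global minimum
# (individual runs are stochastic).
print(perturbed_gd(grad, w0=[1.35]))

Without the injected noise, plain gradient descent started at w = 1.35 stays at the spurious local minimum forever; the annealing schedule is what lets the iterate explore early and still converge once the noise has decayed.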

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhou19d,
  title     = {Toward Understanding the Importance of Noise in Training Neural Networks},
  author    = {Zhou, Mo and Liu, Tianyi and Li, Yan and Lin, Dachao and Zhou, Enlu and Zhao, Tuo},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7594--7602},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/zhou19d/zhou19d.pdf},
  url       = {https://proceedings.mlr.press/v97/zhou19d.html},
  abstract  = {Extensive empirical evidence has corroborated that noise plays a crucial role in the effective and efficient training of deep neural networks. The theory behind this, however, remains largely unknown. This paper studies this fundamental problem through training a simple two-layer convolutional neural network model. Although training such a network requires solving a non-convex optimization problem with a spurious local optimum and a global optimum, we prove that a perturbed gradient descent algorithm, in conjunction with noise annealing, is guaranteed to converge to a global optimum in polynomial time from an arbitrary initialization. This implies that the noise enables the algorithm to efficiently escape from the spurious local optimum. Numerical experiments are provided to support our theory.}
}
Endnote
%0 Conference Paper
%T Toward Understanding the Importance of Noise in Training Neural Networks
%A Mo Zhou
%A Tianyi Liu
%A Yan Li
%A Dachao Lin
%A Enlu Zhou
%A Tuo Zhao
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-zhou19d
%I PMLR
%P 7594--7602
%U https://proceedings.mlr.press/v97/zhou19d.html
%V 97
%X Extensive empirical evidence has corroborated that noise plays a crucial role in the effective and efficient training of deep neural networks. The theory behind this, however, remains largely unknown. This paper studies this fundamental problem through training a simple two-layer convolutional neural network model. Although training such a network requires solving a non-convex optimization problem with a spurious local optimum and a global optimum, we prove that a perturbed gradient descent algorithm, in conjunction with noise annealing, is guaranteed to converge to a global optimum in polynomial time from an arbitrary initialization. This implies that the noise enables the algorithm to efficiently escape from the spurious local optimum. Numerical experiments are provided to support our theory.
APA
Zhou, M., Liu, T., Li, Y., Lin, D., Zhou, E. & Zhao, T. (2019). Toward Understanding the Importance of Noise in Training Neural Networks. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7594-7602. Available from https://proceedings.mlr.press/v97/zhou19d.html.