Sharp Minima Can Generalize For Deep Nets

Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1019-1028, 2017.

Abstract

Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g., Hochreiter & Schmidhuber (1997) and Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient-based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and cannot be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima; alternatively, depending on the definition of flatness, the flatness measure is identical for any given minimum. Furthermore, if we allow a function to be reparametrized, the geometry of its parameters can change drastically without affecting its generalization properties.
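
To make the symmetry argument in the abstract concrete, the following minimal NumPy sketch (layer sizes, variable names, and the value of alpha are illustrative assumptions, not taken from the paper) checks numerically that a one-hidden-layer rectifier network is unchanged when its first layer is scaled by alpha > 0 and its second layer by 1/alpha, even though the parameters themselves, and therefore any Hessian-based measure of sharpness around a minimum, can be rescaled arbitrarily.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def net(x, W1, b1, W2, b2):
    # One-hidden-layer rectifier network: f(x) = W2 relu(W1 x + b1) + b2.
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 10)), rng.normal(size=64)
W2, b2 = rng.normal(size=(3, 64)), rng.normal(size=3)
x = rng.normal(size=10)

# Non-negative homogeneity of the rectifier: relu(alpha * z) = alpha * relu(z) for alpha > 0,
# so scaling (W1, b1) by alpha and W2 by 1/alpha leaves the network function untouched.
alpha = 1e3
rescaled = net(x, alpha * W1, alpha * b1, W2 / alpha, b2)

print(np.allclose(net(x, W1, b1, W2, b2), rescaled))  # True: same function, rescaled parameters

Because alpha can be chosen arbitrarily large, the same function, and hence the same generalization behaviour, can be represented at points of parameter space whose local curvature differs by orders of magnitude, which is the sense in which sharp minima can generalize.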

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-dinh17b,
  title     = {Sharp Minima Can Generalize For Deep Nets},
  author    = {Laurent Dinh and Razvan Pascanu and Samy Bengio and Yoshua Bengio},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1019--1028},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/dinh17b/dinh17b.pdf},
  url       = {https://proceedings.mlr.press/v70/dinh17b.html}
}
Endnote
%0 Conference Paper
%T Sharp Minima Can Generalize For Deep Nets
%A Laurent Dinh
%A Razvan Pascanu
%A Samy Bengio
%A Yoshua Bengio
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-dinh17b
%I PMLR
%P 1019--1028
%U https://proceedings.mlr.press/v70/dinh17b.html
%V 70
APA
Dinh, L., Pascanu, R., Bengio, S. & Bengio, Y. (2017). Sharp Minima Can Generalize For Deep Nets. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1019-1028. Available from https://proceedings.mlr.press/v70/dinh17b.html.
