Comparing Dynamics: Deep Neural Networks versus Glassy Systems

Marco Baity-Jesi, Levent Sagun, Mario Geiger, Stefano Spigler, Gerard Ben Arous, Chiara Cammarota, Yann LeCun, Matthieu Wyart, Giulio Biroli
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:314-323, 2018.

Abstract

We analyze numerically the training dynamics of deep neural networks (DNN) by using methods developed in statistical physics of glassy systems. The two main issues we address are the complexity of the loss-landscape and of the dynamics within it, and to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and data-sets, suggest that during the training process the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss is approaching zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular, the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, thus showing that the statistical properties of the corresponding loss and energy landscapes are different. In contrast, when the network is under-parametrized we observe a typical glassy behavior, thus suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized.

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-baity-jesi18a,
  title     = {Comparing Dynamics: Deep Neural Networks versus Glassy Systems},
  author    = {Baity-Jesi, Marco and Sagun, Levent and Geiger, Mario and Spigler, Stefano and Arous, Gerard Ben and Cammarota, Chiara and LeCun, Yann and Wyart, Matthieu and Biroli, Giulio},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {314--323},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/baity-jesi18a/baity-jesi18a.pdf},
  url       = {https://proceedings.mlr.press/v80/baity-jesi18a.html},
  abstract  = {We analyze numerically the training dynamics of deep neural networks (DNN) by using methods developed in statistical physics of glassy systems. The two main issues we address are the complexity of the loss-landscape and of the dynamics within it, and to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and data-sets, suggest that during the training process the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss is approaching zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular, the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, thus showing that the statistical properties of the corresponding loss and energy landscapes are different. In contrast, when the network is under-parametrized we observe a typical glassy behavior, thus suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized.}
}
Endnote
%0 Conference Paper
%T Comparing Dynamics: Deep Neural Networks versus Glassy Systems
%A Marco Baity-Jesi
%A Levent Sagun
%A Mario Geiger
%A Stefano Spigler
%A Gerard Ben Arous
%A Chiara Cammarota
%A Yann LeCun
%A Matthieu Wyart
%A Giulio Biroli
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-baity-jesi18a
%I PMLR
%P 314--323
%U https://proceedings.mlr.press/v80/baity-jesi18a.html
%V 80
%X We analyze numerically the training dynamics of deep neural networks (DNN) by using methods developed in statistical physics of glassy systems. The two main issues we address are the complexity of the loss-landscape and of the dynamics within it, and to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and data-sets, suggest that during the training process the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss is approaching zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular, the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, thus showing that the statistical properties of the corresponding loss and energy landscapes are different. In contrast, when the network is under-parametrized we observe a typical glassy behavior, thus suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized.
APA
Baity-Jesi, M., Sagun, L., Geiger, M., Spigler, S., Arous, G. B., Cammarota, C., LeCun, Y., Wyart, M., & Biroli, G. (2018). Comparing Dynamics: Deep Neural Networks versus Glassy Systems. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:314-323. Available from https://proceedings.mlr.press/v80/baity-jesi18a.html.