On Energy-Based Models with Overparametrized Shallow Neural Networks

Carles Domingo-Enrich, Alberto Bietti, Eric Vanden-Eijnden, Joan Bruna
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2771-2782, 2021.

Abstract

Energy-based models (EBMs) are a simple yet powerful framework for generative modeling. They are based on a trainable energy function which defines an associated Gibbs measure, and they can be trained and sampled from via well-established statistical tools, such as MCMC. Neural networks may be used as energy function approximators, providing both a rich class of expressive models as well as a flexible device to incorporate data structure. In this work we focus on shallow neural networks. Building from the incipient theory of overparametrized neural networks, we show that models trained in the so-called 'active' regime provide a statistical advantage over their associated 'lazy' or kernel regime, leading to improved adaptivity to hidden low-dimensional structure in the data distribution, as already observed in supervised learning. Our study covers both the maximum likelihood and Stein Discrepancy estimators, and we validate our theoretical results with numerical experiments on synthetic data.
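
To make the setup concrete: an EBM defines the Gibbs measure p(x) ∝ exp(-E(x)), and a shallow (one-hidden-layer) network energy takes the form E(x) = (1/m) Σ_i a_i σ(w_i·x + b_i), where the 1/m mean-field scaling is the one associated with the 'active' regime (a 1/√m scaling would instead correspond to the 'lazy'/kernel regime). The sketch below, in Python with NumPy, samples from such a model with unadjusted Langevin dynamics, one of the MCMC tools the abstract alludes to. It is a minimal illustration only: all dimensions, step sizes, function names, and the added quadratic confinement term are assumptions for the toy, not the paper's experimental configuration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (assumptions, not the paper's experiments): input
    # dimension d and hidden width m; overparametrization means taking m large.
    d, m = 2, 512
    W = rng.standard_normal((m, d))   # hidden-layer weights w_i
    b = rng.standard_normal(m)        # hidden-layer biases b_i
    a = rng.standard_normal(m)        # output weights a_i

    def energy(x):
        # Mean-field (1/m) scaled shallow ReLU energy, plus a quadratic
        # confinement term so that exp(-E) is normalizable in this toy.
        return a @ np.maximum(W @ x + b, 0.0) / m + 0.5 * x @ x

    def grad_energy(x):
        # Gradient in x, using the ReLU subgradient 1{w_i . x + b_i > 0}.
        active = (W @ x + b > 0.0).astype(float)
        return W.T @ (a * active) / m + x

    def langevin_sample(n_steps=2000, step=1e-2):
        # Unadjusted Langevin dynamics targeting p(x) proportional to exp(-E(x)):
        #   x_{k+1} = x_k - step * grad E(x_k) + sqrt(2 * step) * N(0, I)
        x = rng.standard_normal(d)
        for _ in range(n_steps):
            x = x - step * grad_energy(x) + np.sqrt(2.0 * step) * rng.standard_normal(d)
        return x

    print(langevin_sample())

Training would wrap this sampler in an outer loop on (a, W, b), e.g. via the maximum likelihood or Stein Discrepancy estimators studied in the paper; the distinction between the active and lazy regimes then comes from whether the hidden-layer features (W, b) move substantially during training or stay near initialization.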

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-domingo-enrich21a,
  title     = {On Energy-Based Models with Overparametrized Shallow Neural Networks},
  author    = {Domingo-Enrich, Carles and Bietti, Alberto and Vanden-Eijnden, Eric and Bruna, Joan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2771--2782},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/domingo-enrich21a/domingo-enrich21a.pdf},
  url       = {https://proceedings.mlr.press/v139/domingo-enrich21a.html},
  abstract  = {Energy-based models (EBMs) are a simple yet powerful framework for generative modeling. They are based on a trainable energy function which defines an associated Gibbs measure, and they can be trained and sampled from via well-established statistical tools, such as MCMC. Neural networks may be used as energy function approximators, providing both a rich class of expressive models as well as a flexible device to incorporate data structure. In this work we focus on shallow neural networks. Building from the incipient theory of overparametrized neural networks, we show that models trained in the so-called 'active' regime provide a statistical advantage over their associated 'lazy' or kernel regime, leading to improved adaptivity to hidden low-dimensional structure in the data distribution, as already observed in supervised learning. Our study covers both the maximum likelihood and Stein Discrepancy estimators, and we validate our theoretical results with numerical experiments on synthetic data.}
}
Endnote
%0 Conference Paper
%T On Energy-Based Models with Overparametrized Shallow Neural Networks
%A Carles Domingo-Enrich
%A Alberto Bietti
%A Eric Vanden-Eijnden
%A Joan Bruna
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-domingo-enrich21a
%I PMLR
%P 2771--2782
%U https://proceedings.mlr.press/v139/domingo-enrich21a.html
%V 139
%X Energy-based models (EBMs) are a simple yet powerful framework for generative modeling. They are based on a trainable energy function which defines an associated Gibbs measure, and they can be trained and sampled from via well-established statistical tools, such as MCMC. Neural networks may be used as energy function approximators, providing both a rich class of expressive models as well as a flexible device to incorporate data structure. In this work we focus on shallow neural networks. Building from the incipient theory of overparametrized neural networks, we show that models trained in the so-called 'active' regime provide a statistical advantage over their associated 'lazy' or kernel regime, leading to improved adaptivity to hidden low-dimensional structure in the data distribution, as already observed in supervised learning. Our study covers both the maximum likelihood and Stein Discrepancy estimators, and we validate our theoretical results with numerical experiments on synthetic data.
APA
Domingo-Enrich, C., Bietti, A., Vanden-Eijnden, E. & Bruna, J. (2021). On Energy-Based Models with Overparametrized Shallow Neural Networks. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2771-2782. Available from https://proceedings.mlr.press/v139/domingo-enrich21a.html.
