Explaining the effects of non-convergent MCMC in the training of Energy-Based Models

Elisabeth Agoritsas, Giovanni Catania, Aurélien Decelle, Beatriz Seoane
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:322-336, 2023.

Abstract

In this paper, we quantify the impact of using non-convergent Markov chains to train Energy-Based Models (EBMs). In particular, we show analytically that EBMs trained with non-persistent short runs to estimate the gradient can perfectly reproduce a set of empirical statistics of the data, not at the level of the equilibrium measure, but through a precise dynamical process. Our results provide a first-principles explanation for the observations of recent works proposing short runs from random initial conditions as an efficient way to generate high-quality samples in EBMs, and they lay the groundwork for using EBMs as diffusion models. After explaining this effect in generic EBMs, we analyze two solvable models in which the effect of non-convergent sampling on the trained parameters can be described in detail. Finally, we test these predictions numerically on a ConvNet EBM and a Boltzmann machine.
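To make the training scheme discussed in the abstract concrete, here is a minimal sketch (not the authors' code) of maximum-likelihood learning for a fully visible Boltzmann machine, one of the two test cases named above. The log-likelihood gradient is the difference between data averages and model averages of the spin correlations and magnetizations; in the sketch, the model averages are estimated with non-persistent short-run Gibbs chains, i.e. chains restarted from random configurations at every gradient step and run for only k sweeps. The hyperparameter names (k, lr, n_chains, epochs) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(s, J, h):
    # One full Gibbs sweep over +/-1 spins; energy E(s) = -0.5 * s.T J s - h.T s,
    # with J symmetric and zero on the diagonal.
    for i in range(s.shape[1]):
        field = s @ J[:, i] + h[i]                  # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(s_i = +1 | rest)
        s[:, i] = np.where(rng.random(len(s)) < p_up, 1.0, -1.0)
    return s

def train(data, k=10, lr=0.01, epochs=200, n_chains=500):
    # data: array of shape (n_samples, n) with +/-1 entries.
    n = data.shape[1]
    J, h = np.zeros((n, n)), np.zeros(n)
    for _ in range(epochs):
        # Non-persistent chains: fresh random initial conditions every update.
        s = rng.choice([-1.0, 1.0], size=(n_chains, n))
        for _ in range(k):                          # short run: only k sweeps
            s = gibbs_sweep(s, J, h)
        # Gradient = data statistics minus short-run model statistics.
        dJ = data.T @ data / len(data) - s.T @ s / n_chains
        np.fill_diagonal(dJ, 0.0)
        J += lr * dJ
        h += lr * (data.mean(axis=0) - s.mean(axis=0))
    return J, h

Under the paper's analysis, samples should then be generated with the same recipe used during training, random initialization followed by exactly k sweeps: the learned parameters match the data statistics through this dynamical process, not through the model's equilibrium measure, so running the chains much longer would drift away from the data statistics.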

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-agoritsas23a,
  title     = {Explaining the effects of non-convergent {MCMC} in the training of Energy-Based Models},
  author    = {Agoritsas, Elisabeth and Catania, Giovanni and Decelle, Aur\'{e}lien and Seoane, Beatriz},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {322--336},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/agoritsas23a/agoritsas23a.pdf},
  url       = {https://proceedings.mlr.press/v202/agoritsas23a.html}
}
Endnote
%0 Conference Paper
%T Explaining the effects of non-convergent MCMC in the training of Energy-Based Models
%A Elisabeth Agoritsas
%A Giovanni Catania
%A Aurélien Decelle
%A Beatriz Seoane
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-agoritsas23a
%I PMLR
%P 322--336
%U https://proceedings.mlr.press/v202/agoritsas23a.html
%V 202
APA
Agoritsas, E., Catania, G., Decelle, A. & Seoane, B. (2023). Explaining the effects of non-convergent MCMC in the training of Energy-Based Models. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:322-336. Available from https://proceedings.mlr.press/v202/agoritsas23a.html.