Training Deep Energy-Based Models with f-Divergence Minimization

Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10957-10967, 2020.

Abstract

Deep energy-based models (EBMs) are very flexible in distribution parametrization but computationally challenging because of the intractable partition function. They are typically trained via maximum likelihood, using contrastive divergence to approximate the gradient of the KL divergence between the data and model distributions. While KL divergence has many desirable properties, other f-divergences have shown advantages in training implicit density generative models such as generative adversarial networks. In this paper, we propose a general variational framework termed f-EBM to train EBMs using any desired f-divergence. We introduce a corresponding optimization algorithm and prove its local convergence property with non-linear dynamical systems theory. Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.
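As background for the abstract (standard material on f-divergences, not taken from this page), recall that for a convex function f with f(1) = 0, the f-divergence between distributions P and Q admits a Fenchel-dual variational lower bound, which f-GAN-style frameworks of this kind build on:

\[ D_f(P \,\|\, Q) \;=\; \mathbb{E}_{x \sim Q}\!\left[ f\!\left( \frac{p(x)}{q(x)} \right) \right] \;\geq\; \sup_{T} \; \mathbb{E}_{x \sim P}\!\left[ T(x) \right] \;-\; \mathbb{E}_{x \sim Q}\!\left[ f^{*}\!\left( T(x) \right) \right], \]

where f^{*} is the convex conjugate of f and T ranges over a class of critic functions. Choosing f(u) = u log u recovers the KL divergence. The difficulty addressed here is that for an EBM the model density is only known up to the intractable partition function, q_\theta(x) \propto \exp(-E_\theta(x)), so the bound cannot be applied as-is and must be adapted as in the proposed f-EBM framework.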

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-yu20g,
  title     = {Training Deep Energy-Based Models with f-Divergence Minimization},
  author    = {Yu, Lantao and Song, Yang and Song, Jiaming and Ermon, Stefano},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10957--10967},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/yu20g/yu20g.pdf},
  url       = {https://proceedings.mlr.press/v119/yu20g.html},
  abstract  = {Deep energy-based models (EBMs) are very flexible in distribution parametrization but computationally challenging because of the intractable partition function. They are typically trained via maximum likelihood, using contrastive divergence to approximate the gradient of the KL divergence between the data and model distributions. While KL divergence has many desirable properties, other f-divergences have shown advantages in training implicit density generative models such as generative adversarial networks. In this paper, we propose a general variational framework termed f-EBM to train EBMs using any desired f-divergence. We introduce a corresponding optimization algorithm and prove its local convergence property with non-linear dynamical systems theory. Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.}
}
Endnote
%0 Conference Paper
%T Training Deep Energy-Based Models with f-Divergence Minimization
%A Lantao Yu
%A Yang Song
%A Jiaming Song
%A Stefano Ermon
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-yu20g
%I PMLR
%P 10957--10967
%U https://proceedings.mlr.press/v119/yu20g.html
%V 119
%X Deep energy-based models (EBMs) are very flexible in distribution parametrization but computationally challenging because of the intractable partition function. They are typically trained via maximum likelihood, using contrastive divergence to approximate the gradient of the KL divergence between the data and model distributions. While KL divergence has many desirable properties, other f-divergences have shown advantages in training implicit density generative models such as generative adversarial networks. In this paper, we propose a general variational framework termed f-EBM to train EBMs using any desired f-divergence. We introduce a corresponding optimization algorithm and prove its local convergence property with non-linear dynamical systems theory. Experimental results demonstrate the superiority of f-EBM over contrastive divergence, as well as the benefits of training EBMs using f-divergences other than KL.
APA
Yu, L., Song, Y., Song, J. & Ermon, S. (2020). Training Deep Energy-Based Models with f-Divergence Minimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10957-10967. Available from https://proceedings.mlr.press/v119/yu20g.html.
