Variational Annealing of GANs: A Langevin Perspective

Chenyang Tao, Shuyang Dai, Liqun Chen, Ke Bai, Junya Chen, Chang Liu, Ruiyi Zhang, Georgiy Bobashev, Lawrence Carin
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6176-6185, 2019.

Abstract

The generative adversarial network (GAN) has received considerable attention recently as a model for data synthesis, without an explicit specification of a likelihood function. There has been commensurate interest in leveraging likelihood estimates to improve GAN training. To enrich the understanding of this fast-growing yet almost exclusively heuristic-driven subject, we elucidate the theoretical roots of some of the empirical attempts to stabilize and improve GAN training with the introduction of likelihoods. We highlight new insights from variational theory of diffusion processes to derive a likelihood-based regularizing scheme for GAN training, and present a novel approach to train GANs with an unnormalized distribution instead of empirical samples. To substantiate our claims, we provide experimental evidence on how our theoretically-inspired new algorithms improve upon current practice.
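For readers unfamiliar with the "Langevin perspective" referenced in the title, the sketch below is a minimal, generic illustration (not the paper's algorithm) of unadjusted Langevin dynamics, the standard scheme for drawing samples from an unnormalized density p(x) ∝ exp(-E(x)) using only the gradient of the energy. The toy two-mode energy, step size, and function names here are illustrative assumptions; the paper itself studies how such score/likelihood information can be used to regularize GAN training.

import numpy as np

# Generic unadjusted Langevin dynamics for an unnormalized density
# p(x) ∝ exp(-energy(x)). Toy 1-D target: a mixture of two Gaussians.
# This is an illustrative sketch, not the algorithm proposed in the paper.

def energy(x):
    # Negative log of an unnormalized two-mode density.
    return -np.log(np.exp(-0.5 * (x - 2.0) ** 2) + np.exp(-0.5 * (x + 2.0) ** 2))

def grad_energy(x, eps=1e-4):
    # Finite-difference gradient; an analytic gradient or autodiff would be used in practice.
    return (energy(x + eps) - energy(x - eps)) / (2 * eps)

def langevin_sample(n_steps=5000, step_size=0.05, x0=0.0, rng=None):
    # Discretized Langevin update:
    #   x_{t+1} = x_t - (step_size / 2) * grad E(x_t) + sqrt(step_size) * N(0, 1)
    rng = rng or np.random.default_rng(0)
    x = x0
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal()
        x = x - 0.5 * step_size * grad_energy(x) + np.sqrt(step_size) * noise
        samples.append(x)
    return np.array(samples)

if __name__ == "__main__":
    xs = langevin_sample()
    print("sample mean:", xs.mean(), "sample std:", xs.std())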

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-tao19a,
  title     = {Variational Annealing of {GAN}s: A {L}angevin Perspective},
  author    = {Tao, Chenyang and Dai, Shuyang and Chen, Liqun and Bai, Ke and Chen, Junya and Liu, Chang and Zhang, Ruiyi and Bobashev, Georgiy and Carin, Lawrence},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6176--6185},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/tao19a/tao19a.pdf},
  url       = {https://proceedings.mlr.press/v97/tao19a.html},
  abstract  = {The generative adversarial network (GAN) has received considerable attention recently as a model for data synthesis, without an explicit specification of a likelihood function. There has been commensurate interest in leveraging likelihood estimates to improve GAN training. To enrich the understanding of this fast-growing yet almost exclusively heuristic-driven subject, we elucidate the theoretical roots of some of the empirical attempts to stabilize and improve GAN training with the introduction of likelihoods. We highlight new insights from variational theory of diffusion processes to derive a likelihood-based regularizing scheme for GAN training, and present a novel approach to train GANs with an unnormalized distribution instead of empirical samples. To substantiate our claims, we provide experimental evidence on how our theoretically-inspired new algorithms improve upon current practice.}
}
Endnote
%0 Conference Paper
%T Variational Annealing of GANs: A Langevin Perspective
%A Chenyang Tao
%A Shuyang Dai
%A Liqun Chen
%A Ke Bai
%A Junya Chen
%A Chang Liu
%A Ruiyi Zhang
%A Georgiy Bobashev
%A Lawrence Carin
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-tao19a
%I PMLR
%P 6176--6185
%U https://proceedings.mlr.press/v97/tao19a.html
%V 97
%X The generative adversarial network (GAN) has received considerable attention recently as a model for data synthesis, without an explicit specification of a likelihood function. There has been commensurate interest in leveraging likelihood estimates to improve GAN training. To enrich the understanding of this fast-growing yet almost exclusively heuristic-driven subject, we elucidate the theoretical roots of some of the empirical attempts to stabilize and improve GAN training with the introduction of likelihoods. We highlight new insights from variational theory of diffusion processes to derive a likelihood-based regularizing scheme for GAN training, and present a novel approach to train GANs with an unnormalized distribution instead of empirical samples. To substantiate our claims, we provide experimental evidence on how our theoretically-inspired new algorithms improve upon current practice.
APA
Tao, C., Dai, S., Chen, L., Bai, K., Chen, J., Liu, C., Zhang, R., Bobashev, G., & Carin, L. (2019). Variational Annealing of GANs: A Langevin Perspective. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6176-6185. Available from https://proceedings.mlr.press/v97/tao19a.html.
