Composite Functional Gradient Learning of Generative Adversarial Models

Rie Johnson, Tong Zhang
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2371-2379, 2018.

Abstract

This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation. It shows that with a strong discriminator, a good generator can be learned so that the KL divergence between the distributions of real data and generated data improves after each functional gradient step until it converges to zero. Based on the theory, we propose a new stable generative adversarial method. A theoretical insight into the original GAN from this new viewpoint is also provided. The experiments on image generation show the effectiveness of our new method.
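The abstract's central claim is that transporting generated samples along a discriminator-derived functional gradient shrinks the KL divergence to the real distribution at every step. The following toy sketch illustrates that idea in the simplest possible setting; it is our illustrative reading of a functional gradient step, not the paper's actual algorithm. Real data is N(0, 1) and fake data is N(3, 1); for equal-variance Gaussians the Bayes-optimal discriminator logit f(x) = log p(x) - log q(x) has the constant x-gradient (mu_real - mu_fake), so each step shifts the fake samples toward the real mean and the closed-form KL decreases toward zero.

```python
import numpy as np

def functional_gradient_step(x_fake, mu_real, eta):
    """One functional gradient step on generated samples.

    For equal-variance Gaussians N(mu_real, 1) (real) and N(mu_fake, 1)
    (fake), the Bayes-optimal discriminator logit is
        f(x) = log p(x) - log q(x) = (mu_real - mu_fake) * x + const,
    so its x-gradient is the constant (mu_real - mu_fake).  Moving each
    sample a small step along this gradient pulls the fake distribution
    toward the real one.
    """
    mu_fake = x_fake.mean()
    grad_f = mu_real - mu_fake          # gradient of the optimal logit
    return x_fake + eta * grad_f        # transported samples

def kl_gaussians(mu_q, mu_p):
    """KL(N(mu_q, 1) || N(mu_p, 1)) in closed form."""
    return 0.5 * (mu_q - mu_p) ** 2

rng = np.random.default_rng(0)
mu_real = 0.0
x = rng.normal(3.0, 1.0, size=10_000)   # fake samples, mean far from real

kls = []
for _ in range(20):
    kls.append(kl_gaussians(x.mean(), mu_real))
    x = functional_gradient_step(x, mu_real, eta=0.5)

print(f"KL start: {kls[0]:.3f}, KL end: {kls[-1]:.2e}")
```

Because the step uses the gradient of the optimal logit rather than a minimax objective, the KL divergence decreases at every iteration, which is the qualitative behavior the theory guarantees for a strong discriminator.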

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-johnson18a,
  title     = {Composite Functional Gradient Learning of Generative Adversarial Models},
  author    = {Johnson, Rie and Zhang, Tong},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2371--2379},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/johnson18a/johnson18a.pdf},
  url       = {https://proceedings.mlr.press/v80/johnson18a.html},
  abstract  = {This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation. It shows that with a strong discriminator, a good generator can be learned so that the KL divergence between the distributions of real data and generated data improves after each functional gradient step until it converges to zero. Based on the theory, we propose a new stable generative adversarial method. A theoretical insight into the original GAN from this new viewpoint is also provided. The experiments on image generation show the effectiveness of our new method.}
}
Endnote
%0 Conference Paper
%T Composite Functional Gradient Learning of Generative Adversarial Models
%A Rie Johnson
%A Tong Zhang
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-johnson18a
%I PMLR
%P 2371--2379
%U https://proceedings.mlr.press/v80/johnson18a.html
%V 80
%X This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation. It shows that with a strong discriminator, a good generator can be learned so that the KL divergence between the distributions of real data and generated data improves after each functional gradient step until it converges to zero. Based on the theory, we propose a new stable generative adversarial method. A theoretical insight into the original GAN from this new viewpoint is also provided. The experiments on image generation show the effectiveness of our new method.
APA
Johnson, R. & Zhang, T. (2018). Composite Functional Gradient Learning of Generative Adversarial Models. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2371-2379. Available from https://proceedings.mlr.press/v80/johnson18a.html.