A study of quality and diversity in K+1 GANs

Ilya Kavalerov, Wojciech Czaja, Rama Chellappa
Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops, PMLR 137:129-135, 2020.

Abstract

We study the $K+1$ GAN paradigm, which generalizes the canonical true/fake GAN by training the generator against a $K+1$-ary classifier instead of a binary discriminator. We show that the standard formulation of the $K+1$ GAN does not fully take advantage of class information, and that the generative distribution it learns is no different from the one a traditional binary GAN learns. We then investigate another GAN loss function that dynamically labels its data during training, and show how this leads to a learned generative distribution that emphasizes the modes of the target distribution. Finally, we investigate to what degree these theoretical expectations about the two training strategies affect the quality and diversity of generators learned on real-world data.
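To make the abstract's first claim concrete, the following is a minimal sketch of the argument, assuming the usual $K+1$ parameterization from semi-supervised GANs (Salimans et al., 2016); the notation $D_1(x), \dots, D_{K+1}(x)$ for the discriminator's softmax outputs, with class $K+1$ reserved for fake samples, is introduced here for illustration and is not taken from the paper. In the minimax form, the generator minimizes

\[
\min_{G}\ \mathbb{E}_{x \sim p_{G}}\big[\log D_{K+1}(x)\big],
\qquad \text{where} \qquad
\sum_{k=1}^{K} D_{k}(x) = 1 - D_{K+1}(x).
\]

Because the softmax outputs sum to one, this objective depends on the per-class probabilities $D_1, \dots, D_K$ only through their sum $1 - D_{K+1}(x)$: the generator receives the same aggregate real-versus-fake signal as in a binary GAN, which is why the standard $K+1$ formulation ends up learning the same generative distribution.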

Cite this Paper

BibTeX
@InProceedings{pmlr-v137-kavalerov20a,
  title     = {A study of quality and diversity in {K+1} {GANs}},
  author    = {Kavalerov, Ilya and Czaja, Wojciech and Chellappa, Rama},
  booktitle = {Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops},
  pages     = {129--135},
  year      = {2020},
  editor    = {Zosa Forde, Jessica and Ruiz, Francisco and Pradier, Melanie F. and Schein, Aaron},
  volume    = {137},
  series    = {Proceedings of Machine Learning Research},
  month     = {12 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v137/kavalerov20a/kavalerov20a.pdf},
  url       = {https://proceedings.mlr.press/v137/kavalerov20a.html}
}
