A study of quality and diversity in K+1 GANs
Proceedings on "I Can't Believe It's Not Better!" at NeurIPS Workshops, PMLR 137:129-135, 2020.
Abstract
We study the $K+1$ GAN paradigm, which generalizes the canonical true/fake GAN by training the generator against a $(K+1)$-ary classifier rather than a binary discriminator. We show that the standard formulation of the $K+1$ GAN does not fully exploit class information, and that the generative distribution it learns is no different from the one learned by a traditional binary GAN. We then investigate an alternative GAN loss that dynamically relabels its data during training, and show that it learns a generative distribution that emphasizes the modes of the target distribution. Finally, we investigate to what degree these theoretical expectations about the two training strategies bear out in the quality and diversity of generators learned on real-world data.
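To make the standard $K+1$ formulation concrete, here is a minimal PyTorch sketch of the losses it describes: real samples are trained under their true class label, fakes under an extra $(K+1)$-th "fake" class, and the generator pushes probability mass away from the fake class. The discriminator `D`, its output convention (a `(K+1)`-logit classifier with index `K` reserved for "fake"), and all function names are illustrative assumptions, not the paper's exact implementation; the dynamic-labeling variant the abstract mentions is not shown.

```python
# Hypothetical sketch of K+1 GAN losses; D is assumed to be a classifier
# returning (batch, K+1) logits, with index K denoting the "fake" class.
import torch
import torch.nn.functional as F

def d_loss_k_plus_1(D, x_real, y_real, x_fake, K):
    """Classifier loss: real samples keep their class label in {0..K-1};
    generated samples are labeled with the extra fake class K."""
    logits_real = D(x_real)             # (B, K+1)
    logits_fake = D(x_fake.detach())    # (B, K+1); stop generator gradients
    loss_real = F.cross_entropy(logits_real, y_real)
    fake_labels = torch.full((x_fake.size(0),), K,
                             dtype=torch.long, device=x_fake.device)
    loss_fake = F.cross_entropy(logits_fake, fake_labels)
    return loss_real + loss_fake

def g_loss_k_plus_1(D, x_fake, K):
    """Generator loss: maximize the total probability assigned to the
    K real classes, i.e. minimize -log(1 - p_fake)."""
    log_probs = F.log_softmax(D(x_fake), dim=1)           # (B, K+1)
    log_p_real = torch.logsumexp(log_probs[:, :K], dim=1)  # log sum_{k<K} p_k
    return -log_p_real.mean()
```

Because the generator objective only involves the aggregate real-class probability $1 - p_{\text{fake}}$, this sketch also illustrates the abstract's point: per-class information does not enter the generator's gradient in the standard formulation.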