Mixed batches and symmetric discriminators for GAN training

Thomas Lucas, Corentin Tallec, Yann Ollivier, Jakob Verbeek
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2844-2853, 2018.

Abstract

Generative adversarial networks (GANs) are powerful generative models based on providing feedback to a generative network via a discriminator network. However, the discriminator usually assesses individual samples. This prevents the discriminator from accessing global distributional statistics of generated samples, and often leads to mode dropping: the generator models only part of the target distribution. We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch. The latter score does not depend on the order of samples in a batch. Rather than learning this invariance, we introduce a generic permutation-invariant discriminator architecture. This architecture is provably a universal approximator of all symmetric functions. Experimentally, our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets, both qualitatively and quantitatively.
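The batch-level scoring described in the abstract can be illustrated with a minimal sketch: a tiny two-layer head computes per-sample features, mean-pools them across the batch dimension (a symmetric operation), and maps the pooled vector to a score in (0, 1). This is not the paper's architecture; the weights and layer sizes below are hypothetical, and only the permutation invariance of the batch score is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for illustration only (not the paper's architecture).
W1 = rng.normal(size=(4, 8))   # per-sample feature map
W2 = rng.normal(size=(8, 1))   # readout on the pooled batch features

def batch_ratio_score(batch):
    """Score a batch of samples, shape (n, 4), with a symmetric head.

    Per-sample features are mean-pooled over the batch dimension, so any
    reordering of the batch yields the same score, mirroring the
    order-independence of the true-sample ratio the discriminator predicts.
    """
    h = np.tanh(batch @ W1)              # per-sample features, shape (n, 8)
    pooled = h.mean(axis=0)              # symmetric pooling over the batch
    logit = pooled @ W2                  # scalar readout
    return 1.0 / (1.0 + np.exp(-logit))  # squashed to (0, 1), like a ratio

batch = rng.normal(size=(16, 4))
shuffled = batch[rng.permutation(16)]
# Shuffling the batch leaves the score unchanged.
assert np.allclose(batch_ratio_score(batch), batch_ratio_score(shuffled))
```

Mean pooling is just one symmetric aggregator; the paper's claim of universal approximation of symmetric functions concerns a more general permutation-invariant architecture.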

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-lucas18a,
  title = {Mixed batches and symmetric discriminators for {GAN} training},
  author = {Lucas, Thomas and Tallec, Corentin and Ollivier, Yann and Verbeek, Jakob},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {2844--2853},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/lucas18a/lucas18a.pdf},
  url = {https://proceedings.mlr.press/v80/lucas18a.html},
  abstract = {Generative adversarial networks (GANs) are powerful generative models based on providing feedback to a generative network via a discriminator network. However, the discriminator usually assesses individual samples. This prevents the discriminator from accessing global distributional statistics of generated samples, and often leads to mode dropping: the generator models only part of the target distribution. We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch. The latter score does not depend on the order of samples in a batch. Rather than learning this invariance, we introduce a generic permutation-invariant discriminator architecture. This architecture is provably a universal approximator of all symmetric functions. Experimentally, our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets, both qualitatively and quantitatively.}
}
Endnote
%0 Conference Paper
%T Mixed batches and symmetric discriminators for GAN training
%A Thomas Lucas
%A Corentin Tallec
%A Yann Ollivier
%A Jakob Verbeek
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-lucas18a
%I PMLR
%P 2844--2853
%U https://proceedings.mlr.press/v80/lucas18a.html
%V 80
%X Generative adversarial networks (GANs) are powerful generative models based on providing feedback to a generative network via a discriminator network. However, the discriminator usually assesses individual samples. This prevents the discriminator from accessing global distributional statistics of generated samples, and often leads to mode dropping: the generator models only part of the target distribution. We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch. The latter score does not depend on the order of samples in a batch. Rather than learning this invariance, we introduce a generic permutation-invariant discriminator architecture. This architecture is provably a universal approximator of all symmetric functions. Experimentally, our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets, both qualitatively and quantitatively.
APA
Lucas, T., Tallec, C., Ollivier, Y., & Verbeek, J. (2018). Mixed batches and symmetric discriminators for GAN training. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2844-2853. Available from https://proceedings.mlr.press/v80/lucas18a.html.