Tempered Adversarial Networks

Mehdi S. M. Sajjadi, Giambattista Parascandolo, Arash Mehrjou, Bernhard Schölkopf
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4451-4459, 2018.

Abstract

Generative adversarial networks (GANs) have been shown to produce realistic samples from high-dimensional distributions, but training them is considered hard. A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset. We propose a simple modification that gives the generator control over the real samples which leads to a tempered learning process for both generator and discriminator. The real data distribution passes through a lens before being revealed to the discriminator, balancing the generator and discriminator by gradually revealing more detailed features necessary to produce high-quality results. The proposed module automatically adjusts the learning process to the current strength of the networks, yet is generic and easy to add to any GAN variant. In a number of experiments, we show that this can improve quality, stability and/or convergence speed across a range of different GAN architectures (DCGAN, LSGAN, WGAN-GP).
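The lens idea from the abstract can be sketched in a toy form. The following is a hedged illustration, not the paper's actual architecture: the paper's lens is itself a trained network, whereas here the lens is modeled as a simple blend between each real sample and a smoothed copy of it, with a schedule parameter `alpha` annealing from 0 (heavily smoothed reals) to 1 (identity, full-detail reals). The moving-average smoothing and the linear schedule are assumptions chosen purely for illustration.

```python
import numpy as np

def smooth(batch, kernel=5):
    """Crude low-pass filter: moving average along the feature axis.
    Stands in for whatever the lens removes early in training (assumption)."""
    pad = kernel // 2
    padded = np.pad(batch, ((0, 0), (pad, pad)), mode="edge")
    window = np.ones(kernel) / kernel
    return np.stack([np.convolve(row, window, mode="valid") for row in padded])

def lens(real_batch, alpha):
    """Blend real samples with a smoothed version before the discriminator sees them.

    alpha = 0 -> fully smoothed reals (easier for the generator to match),
    alpha = 1 -> identity (full-detail real data), mirroring the idea of
    gradually revealing more detailed features of the real distribution.
    """
    return alpha * real_batch + (1.0 - alpha) * smooth(real_batch)

def alpha_schedule(step, total_steps):
    """Hypothetical linear anneal toward the identity lens."""
    return min(1.0, step / total_steps)
```

In a training loop, the discriminator would be fed `lens(real_batch, alpha_schedule(step, total_steps))` in place of `real_batch`; in the paper this adjustment is driven automatically by the networks' current strength rather than by a fixed schedule.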

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-sajjadi18a,
  title     = {Tempered Adversarial Networks},
  author    = {Sajjadi, Mehdi S. M. and Parascandolo, Giambattista and Mehrjou, Arash and Sch{\"o}lkopf, Bernhard},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4451--4459},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/sajjadi18a/sajjadi18a.pdf},
  url       = {https://proceedings.mlr.press/v80/sajjadi18a.html},
  abstract  = {Generative adversarial networks (GANs) have been shown to produce realistic samples from high-dimensional distributions, but training them is considered hard. A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset. We propose a simple modification that gives the generator control over the real samples which leads to a tempered learning process for both generator and discriminator. The real data distribution passes through a lens before being revealed to the discriminator, balancing the generator and discriminator by gradually revealing more detailed features necessary to produce high-quality results. The proposed module automatically adjusts the learning process to the current strength of the networks, yet is generic and easy to add to any GAN variant. In a number of experiments, we show that this can improve quality, stability and/or convergence speed across a range of different GAN architectures (DCGAN, LSGAN, WGAN-GP).}
}
Endnote
%0 Conference Paper
%T Tempered Adversarial Networks
%A Mehdi S. M. Sajjadi
%A Giambattista Parascandolo
%A Arash Mehrjou
%A Bernhard Schölkopf
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-sajjadi18a
%I PMLR
%P 4451--4459
%U https://proceedings.mlr.press/v80/sajjadi18a.html
%V 80
%X Generative adversarial networks (GANs) have been shown to produce realistic samples from high-dimensional distributions, but training them is considered hard. A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset. We propose a simple modification that gives the generator control over the real samples which leads to a tempered learning process for both generator and discriminator. The real data distribution passes through a lens before being revealed to the discriminator, balancing the generator and discriminator by gradually revealing more detailed features necessary to produce high-quality results. The proposed module automatically adjusts the learning process to the current strength of the networks, yet is generic and easy to add to any GAN variant. In a number of experiments, we show that this can improve quality, stability and/or convergence speed across a range of different GAN architectures (DCGAN, LSGAN, WGAN-GP).
APA
Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A. & Schölkopf, B. (2018). Tempered Adversarial Networks. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4451-4459. Available from https://proceedings.mlr.press/v80/sajjadi18a.html.