Improved Training of Generative Adversarial Networks Using Representative Features

Duhyeon Bang, Hyunjung Shim
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:433-442, 2018.

Abstract

Despite the success of generative adversarial networks (GANs) for image generation, the trade-off between visual quality and image diversity remains a significant issue. This paper achieves both aims simultaneously by improving the stability of GAN training. The key idea of the proposed approach is to implicitly regularize the discriminator using representative features. Focusing on the fact that the standard GAN minimizes reverse Kullback-Leibler (KL) divergence, we transfer representative features, extracted from the data distribution by a pre-trained autoencoder (AE), to the discriminator of the standard GAN. Because the AE learns to minimize forward KL divergence, our GAN training with representative features is influenced by both reverse and forward KL divergence. Consequently, extensive evaluations verify that the proposed approach improves both the visual quality and the diversity of state-of-the-art GANs.
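The abstract describes the method only at a high level. As a rough illustration, the sketch below (in PyTorch) shows one way representative features from a pre-trained autoencoder's encoder could be fed into a GAN discriminator: the encoder is frozen and its features are concatenated with the discriminator's own adversarial features before the real/fake decision. The 64x64 RGB input size, layer widths, and concatenation-based fusion are illustrative assumptions, not the authors' exact architecture.

    # Sketch: discriminator regularized by frozen "representative" features
    # from an autoencoder's encoder (assumptions noted in the text above).
    import torch
    import torch.nn as nn

    class PretrainedEncoder(nn.Module):
        """Encoder half of an autoencoder assumed to be pre-trained on real data."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, feat_dim),
            )

        def forward(self, x):
            return self.net(x)

    class RepFeatureDiscriminator(nn.Module):
        """Discriminator whose real/fake decision also sees frozen representative features."""
        def __init__(self, encoder, feat_dim=128):
            super().__init__()
            self.encoder = encoder
            for p in self.encoder.parameters():   # freeze: features are transferred, not re-trained
                p.requires_grad_(False)
            self.adv_features = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(128 + feat_dim, 1)  # fuse both feature sets into one logit

        def forward(self, x):
            h_adv = self.adv_features(x)          # learned adversarial features
            with torch.no_grad():
                h_rep = self.encoder(x)           # representative features from the frozen AE encoder
            return self.classifier(torch.cat([h_adv, h_rep], dim=1))

    # Usage: logits = RepFeatureDiscriminator(PretrainedEncoder())(torch.randn(8, 3, 64, 64))

Keeping the encoder frozen anchors one branch of the discriminator to features learned from the data distribution by the forward-KL-trained AE, while the adversarial branch continues to adapt during GAN training, which matches the regularization intuition described in the abstract.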

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-bang18a,
  title     = {Improved Training of Generative Adversarial Networks Using Representative Features},
  author    = {Bang, Duhyeon and Shim, Hyunjung},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {433--442},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/bang18a/bang18a.pdf},
  url       = {https://proceedings.mlr.press/v80/bang18a.html},
  abstract  = {Despite the success of generative adversarial networks (GANs) for image generation, the trade-off between visual quality and image diversity remains a significant issue. This paper achieves both aims simultaneously by improving the stability of training GANs. The key idea of the proposed approach is to implicitly regularize the discriminator using representative features. Focusing on the fact that standard GAN minimizes reverse Kullback-Leibler (KL) divergence, we transfer the representative feature, which is extracted from the data distribution using a pre-trained autoencoder (AE), to the discriminator of standard GANs. Because the AE learns to minimize forward KL divergence, our GAN training with representative features is influenced by both reverse and forward KL divergence. Consequently, the proposed approach is verified to improve visual quality and diversity of state of the art GANs using extensive evaluations.}
}
Endnote
%0 Conference Paper
%T Improved Training of Generative Adversarial Networks Using Representative Features
%A Duhyeon Bang
%A Hyunjung Shim
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-bang18a
%I PMLR
%P 433--442
%U https://proceedings.mlr.press/v80/bang18a.html
%V 80
%X Despite the success of generative adversarial networks (GANs) for image generation, the trade-off between visual quality and image diversity remains a significant issue. This paper achieves both aims simultaneously by improving the stability of training GANs. The key idea of the proposed approach is to implicitly regularize the discriminator using representative features. Focusing on the fact that standard GAN minimizes reverse Kullback-Leibler (KL) divergence, we transfer the representative feature, which is extracted from the data distribution using a pre-trained autoencoder (AE), to the discriminator of standard GANs. Because the AE learns to minimize forward KL divergence, our GAN training with representative features is influenced by both reverse and forward KL divergence. Consequently, the proposed approach is verified to improve visual quality and diversity of state of the art GANs using extensive evaluations.
APA
Bang, D. & Shim, H. (2018). Improved Training of Generative Adversarial Networks Using Representative Features. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:433-442. Available from https://proceedings.mlr.press/v80/bang18a.html.
