On Leveraging Pretrained GANs for Generation with Limited Data

Miaoyun Zhao, Yulai Cong, Lawrence Carin
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11340-11351, 2020.

Abstract

Recent work has shown that generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images. Most images so generated are not contained in the training dataset, suggesting potential for augmenting training sets with GAN-generated data. While this scenario is of particular relevance when limited data are available, there remains the issue of training the GAN itself on that limited data. To facilitate this, we leverage existing GAN models pretrained on large-scale datasets (such as ImageNet) to introduce additional knowledge that may not exist within the limited data, following the concept of transfer learning. Demonstrating on natural-image generation, we reveal that the low-level filters (those close to observations) of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually distinct target domain with limited training data. To further adapt the transferred filters to the target domain, we propose adaptive filter modulation (AdaFM). An extensive set of experiments demonstrates the effectiveness of the proposed techniques for generation with limited data.
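
For readers who want a concrete picture of the adaptive filter modulation (AdaFM) idea summarized above, the following is a minimal PyTorch sketch, not the authors' released code. It assumes AdaFM keeps the transferred convolution filters frozen and learns a small per-filter scale (gamma) and shift (beta) that modulate those filters for the target domain; the class name AdaFMConv2d, tensor shapes, and usage below are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaFMConv2d(nn.Module):
    """Convolution whose pretrained filters stay frozen and are modulated by
    learned scale (gamma) and shift (beta) parameters -- an AdaFM-style layer
    (illustrative sketch, not the paper's reference implementation)."""

    def __init__(self, pretrained_weight, pretrained_bias=None, stride=1, padding=1):
        super().__init__()
        out_ch, in_ch, _, _ = pretrained_weight.shape
        # Filters transferred from the source GAN; kept frozen (buffers, not parameters).
        self.register_buffer("weight", pretrained_weight.clone())
        self.register_buffer("bias", None if pretrained_bias is None else pretrained_bias.clone())
        # Small learned modulation parameters, adapted on the limited target-domain data.
        self.gamma = nn.Parameter(torch.ones(out_ch, in_ch, 1, 1))
        self.beta = nn.Parameter(torch.zeros(out_ch, in_ch, 1, 1))
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Modulate the frozen filters, then apply a standard convolution.
        w = self.gamma * self.weight + self.beta
        return F.conv2d(x, w, self.bias, stride=self.stride, padding=self.padding)


# Hypothetical usage: wrap transferred 3x3 filters (shapes are illustrative only).
pretrained_w = torch.randn(64, 32, 3, 3)   # stands in for filters from a pretrained GAN
layer = AdaFMConv2d(pretrained_w, padding=1)
out = layer(torch.randn(1, 32, 16, 16))    # -> (1, 64, 16, 16)

Under these assumptions, only the gamma and beta tensors are trained on the small target dataset, which is far fewer parameters than retraining the filters themselves.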

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhao20a,
  title     = {On Leveraging Pretrained {GAN}s for Generation with Limited Data},
  author    = {Zhao, Miaoyun and Cong, Yulai and Carin, Lawrence},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11340--11351},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhao20a/zhao20a.pdf},
  url       = {https://proceedings.mlr.press/v119/zhao20a.html},
  abstract  = {Recent work has shown generative adversarial networks (GANs) can generate highly realistic images, that are often indistinguishable (by humans) from real images. Most images so generated are not contained in the training dataset, suggesting potential for augmenting training sets with GAN-generated data. While this scenario is of particular relevance when there are limited data available, there is still the issue of training the GAN itself based on that limited data. To facilitate this, we leverage existing GAN models pretrained on large-scale datasets (like ImageNet) to introduce additional knowledge (which may not exist within the limited data), following the concept of transfer learning. Demonstrated by natural-image generation, we reveal that low-level filters (those close to observations) of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually-distinct target domain with limited training data. To further adapt the transferred filters to the target domain, we propose adaptive filter modulation (AdaFM). An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.}
}
Endnote
%0 Conference Paper
%T On Leveraging Pretrained GANs for Generation with Limited Data
%A Miaoyun Zhao
%A Yulai Cong
%A Lawrence Carin
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhao20a
%I PMLR
%P 11340--11351
%U https://proceedings.mlr.press/v119/zhao20a.html
%V 119
%X Recent work has shown generative adversarial networks (GANs) can generate highly realistic images, that are often indistinguishable (by humans) from real images. Most images so generated are not contained in the training dataset, suggesting potential for augmenting training sets with GAN-generated data. While this scenario is of particular relevance when there are limited data available, there is still the issue of training the GAN itself based on that limited data. To facilitate this, we leverage existing GAN models pretrained on large-scale datasets (like ImageNet) to introduce additional knowledge (which may not exist within the limited data), following the concept of transfer learning. Demonstrated by natural-image generation, we reveal that low-level filters (those close to observations) of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually-distinct target domain with limited training data. To further adapt the transferred filters to the target domain, we propose adaptive filter modulation (AdaFM). An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
APA
Zhao, M., Cong, Y. & Carin, L. (2020). On Leveraging Pretrained GANs for Generation with Limited Data. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11340-11351. Available from https://proceedings.mlr.press/v119/zhao20a.html.
