High-Fidelity Image Generation With Fewer Labels

Mario Lučić, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, Sylvain Gelly
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4183-4192, 2019.

Abstract

Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models are able to generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on both unsupervised ImageNet synthesis, as well as in the conditional setting. In particular, the proposed approach is able to match the sample quality (as measured by FID) of the current state-of-the-art conditional model BigGAN on ImageNet using only 10% of the labels and outperform it using 20% of the labels.
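The sample-quality claims above are stated in terms of FID (Fréchet Inception Distance), which fits a Gaussian to Inception-network activations of real and generated images and measures the Fréchet distance between the two fits. A minimal sketch of that distance computation from precomputed means and covariances (the function name `frechet_distance` is illustrative; a full FID pipeline would also extract the Inception activations):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which we discard.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical statistics give a distance of 0; for equal covariances the distance reduces to the squared Euclidean distance between the means.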

Cite this Paper
BibTeX
@InProceedings{pmlr-v97-lucic19a,
  title     = {High-Fidelity Image Generation With Fewer Labels},
  author    = {Lu{\v{c}}i{\'c}, Mario and Tschannen, Michael and Ritter, Marvin and Zhai, Xiaohua and Bachem, Olivier and Gelly, Sylvain},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4183--4192},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/lucic19a/lucic19a.pdf},
  url       = {https://proceedings.mlr.press/v97/lucic19a.html},
  abstract  = {Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models are able to generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on both unsupervised ImageNet synthesis, as well as in the conditional setting. In particular, the proposed approach is able to match the sample quality (as measured by FID) of the current state-of-the-art conditional model BigGAN on ImageNet using only 10% of the labels and outperform it using 20% of the labels.}
}
Endnote
%0 Conference Paper
%T High-Fidelity Image Generation With Fewer Labels
%A Mario Lučić
%A Michael Tschannen
%A Marvin Ritter
%A Xiaohua Zhai
%A Olivier Bachem
%A Sylvain Gelly
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-lucic19a
%I PMLR
%P 4183--4192
%U https://proceedings.mlr.press/v97/lucic19a.html
%V 97
%X Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models are able to generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent work on self- and semi-supervised learning to outperform the state of the art on both unsupervised ImageNet synthesis, as well as in the conditional setting. In particular, the proposed approach is able to match the sample quality (as measured by FID) of the current state-of-the-art conditional model BigGAN on ImageNet using only 10% of the labels and outperform it using 20% of the labels.
APA
Lučić, M., Tschannen, M., Ritter, M., Zhai, X., Bachem, O. & Gelly, S. (2019). High-Fidelity Image Generation With Fewer Labels. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4183-4192. Available from https://proceedings.mlr.press/v97/lucic19a.html.