Refining Synthetic Images with Semantic Layouts by Adversarial Training

Tongtong Zhao, Yuxiao Yan, JinJia Peng, HaoHui Wei, Xianping Fu
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:863-878, 2018.

Abstract

Recent progress in learning-by-synthesis has shown that training models on synthetic images can substantially reduce the cost of manpower and material resources. However, models trained on synthetic images still fall short of the performance achieved with natural images, because synthetic images follow a different distribution. Previous methods addressed this gap by learning a model that improves the realism of synthetic images; their drawback is that distortion is not corrected and the level of realism is unstable. To solve this problem, we propose a new architecture for refining synthetic images that draws on the idea of style transfer, through which we can efficiently reduce image distortion and minimize the need for real-data annotation. We show that this enables the generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation, showing a significant improvement over using synthetic images and achieving state-of-the-art results on various datasets, including the MPIIGaze dataset.

Cite this Paper

BibTeX
@InProceedings{pmlr-v95-zhao18a,
  title     = {Refining Synthetic Images with Semantic Layouts by Adversarial Training},
  author    = {Zhao, Tongtong and Yan, Yuxiao and Peng, JinJia and Wei, HaoHui and Fu, Xianping},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  pages     = {863--878},
  year      = {2018},
  editor    = {Zhu, Jun and Takeuchi, Ichiro},
  volume    = {95},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--16 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v95/zhao18a/zhao18a.pdf},
  url       = {https://proceedings.mlr.press/v95/zhao18a.html},
  abstract  = {Recent progress in learning-by-synthesis has shown that training models on synthetic images can substantially reduce the cost of manpower and material resources. However, models trained on synthetic images still fall short of the performance achieved with natural images, because synthetic images follow a different distribution. Previous methods addressed this gap by learning a model that improves the realism of synthetic images; their drawback is that distortion is not corrected and the level of realism is unstable. To solve this problem, we propose a new architecture for refining synthetic images that draws on the idea of style transfer, through which we can efficiently reduce image distortion and minimize the need for real-data annotation. We show that this enables the generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation, showing a significant improvement over using synthetic images and achieving state-of-the-art results on various datasets, including the MPIIGaze dataset.}
}
Endnote
%0 Conference Paper
%T Refining Synthetic Images with Semantic Layouts by Adversarial Training
%A Tongtong Zhao
%A Yuxiao Yan
%A JinJia Peng
%A HaoHui Wei
%A Xianping Fu
%B Proceedings of The 10th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jun Zhu
%E Ichiro Takeuchi
%F pmlr-v95-zhao18a
%I PMLR
%P 863--878
%U https://proceedings.mlr.press/v95/zhao18a.html
%V 95
%X Recent progress in learning-by-synthesis has shown that training models on synthetic images can substantially reduce the cost of manpower and material resources. However, models trained on synthetic images still fall short of the performance achieved with natural images, because synthetic images follow a different distribution. Previous methods addressed this gap by learning a model that improves the realism of synthetic images; their drawback is that distortion is not corrected and the level of realism is unstable. To solve this problem, we propose a new architecture for refining synthetic images that draws on the idea of style transfer, through which we can efficiently reduce image distortion and minimize the need for real-data annotation. We show that this enables the generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation, showing a significant improvement over using synthetic images and achieving state-of-the-art results on various datasets, including the MPIIGaze dataset.
APA
Zhao, T., Yan, Y., Peng, J., Wei, H., & Fu, X. (2018). Refining Synthetic Images with Semantic Layouts by Adversarial Training. Proceedings of The 10th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research, 95:863-878. Available from https://proceedings.mlr.press/v95/zhao18a.html.