Automated Synthetic-to-Real Generalization

Wuyang Chen, Zhiding Yu, Zhangyang Wang, Animashree Anandkumar
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1746-1756, 2020.

Abstract

Models trained on synthetic images often face degraded generalization to real data. As a convention, these models are often initialized with an ImageNet-pretrained representation. Yet the role of ImageNet knowledge is seldom discussed, despite common practices that leverage this knowledge to maintain generalization ability. An example is the careful hand-tuning of early stopping and layer-wise learning rates, which is shown to improve synthetic-to-real generalization but is also laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain representations similar to those of the ImageNet-pretrained model, and propose a learning-to-optimize (L2O) strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve synthetic-to-real generalization performance without seeing or training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG.
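To make the first ingredient concrete, below is a minimal PyTorch sketch of the kind of representation-transfer regularizer the abstract describes: a frozen ImageNet-pretrained copy of the network serves as a reference, and the synthetically trained model is penalized (here via a KL divergence on its ImageNet-class predictions) for drifting away from it. The names (student, teacher, lam), the loss weight, and the dummy batches are illustrative assumptions, not the released implementation; the exact loss and training setup are defined in the code at https://github.com/NVlabs/ASG.

    # Hedged sketch: keep a synthetically trained model close to a frozen
    # ImageNet-pretrained reference. Assumes PyTorch/torchvision; the batch
    # data is random so the snippet is self-contained and runnable.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    student = models.resnet101(pretrained=True)         # to be trained on synthetic data
    teacher = models.resnet101(pretrained=True).eval()  # frozen ImageNet reference
    for p in teacher.parameters():
        p.requires_grad_(False)

    optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)
    lam = 0.1  # weight on the similarity term (hypothetical value)

    # Dummy stand-ins for a synthetic-data batch and an ImageNet batch.
    syn_x = torch.randn(4, 3, 224, 224)
    syn_y = torch.randint(0, 1000, (4,))
    img_x = torch.randn(4, 3, 224, 224)

    task_loss = F.cross_entropy(student(syn_x), syn_y)  # supervised loss on synthetic data
    with torch.no_grad():
        t_logits = teacher(img_x)                       # reference ImageNet predictions
    kl = F.kl_div(F.log_softmax(student(img_x), dim=1),
                  F.softmax(t_logits, dim=1),
                  reduction='batchmean')                # stay close to ImageNet behavior
    loss = task_loss + lam * kl

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The paper's second ingredient, the L2O strategy, would replace the single lr above with per-layer parameter groups whose learning rates are selected by a learned policy; that policy is omitted from this sketch.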

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-chen20x,
  title     = {Automated Synthetic-to-Real Generalization},
  author    = {Chen, Wuyang and Yu, Zhiding and Wang, Zhangyang and Anandkumar, Animashree},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1746--1756},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/chen20x/chen20x.pdf},
  url       = {https://proceedings.mlr.press/v119/chen20x.html},
  abstract  = {Models trained on synthetic images often face degraded generalization to real data. As a convention, these models are often initialized with ImageNet pretrained representation. Yet the role of ImageNet knowledge is seldom discussed despite common practices that leverage this knowledge to maintain the generalization ability. An example is the careful hand-tuning of early stopping and layer-wise learning rates, which is shown to improve synthetic-to-real generalization but is also laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain similar representations with the ImageNet pretrained model, and propose a \emph{learning-to-optimize (L2O)} strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve the synthetic-to-real generalization performance without seeing and training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG.}
}
Endnote
%0 Conference Paper
%T Automated Synthetic-to-Real Generalization
%A Wuyang Chen
%A Zhiding Yu
%A Zhangyang Wang
%A Animashree Anandkumar
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-chen20x
%I PMLR
%P 1746--1756
%U https://proceedings.mlr.press/v119/chen20x.html
%V 119
%X Models trained on synthetic images often face degraded generalization to real data. As a convention, these models are often initialized with ImageNet pretrained representation. Yet the role of ImageNet knowledge is seldom discussed despite common practices that leverage this knowledge to maintain the generalization ability. An example is the careful hand-tuning of early stopping and layer-wise learning rates, which is shown to improve synthetic-to-real generalization but is also laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain similar representations with the ImageNet pretrained model, and propose a \emph{learning-to-optimize (L2O)} strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve the synthetic-to-real generalization performance without seeing and training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG.
APA
Chen, W., Yu, Z., Wang, Z. & Anandkumar, A. (2020). Automated Synthetic-to-Real Generalization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1746-1756. Available from https://proceedings.mlr.press/v119/chen20x.html.
