Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization

Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:27114-27131, 2023.

Abstract

In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference. This is a difficult problem, and despite active studies in recent years, it remains a great challenge. In this paper, we take a simple yet effective approach to tackle this issue. We propose test-time style shifting, which shifts the style of the test sample (that has a large style gap with the source domains) to the nearest source domain that the model is already familiar with, before making the prediction. This strategy enables the model to handle any target domains with arbitrary style statistics, without additional model update at test-time. Additionally, we propose style balancing, which provides a great platform for maximizing the advantage of test-time style shifting by handling the DG-specific imbalance issues. The proposed ideas are easy to implement and successfully work in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-park23d,
  title     = {Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization},
  author    = {Park, Jungwuk and Han, Dong-Jun and Kim, Soyeong and Moon, Jaekyun},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {27114--27131},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/park23d/park23d.pdf},
  url       = {https://proceedings.mlr.press/v202/park23d.html},
  abstract  = {In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference. This is a difficult problem, and despite active studies in recent years, it remains a great challenge. In this paper, we take a simple yet effective approach to tackle this issue. We propose test-time style shifting, which shifts the style of the test sample (that has a large style gap with the source domains) to the nearest source domain that the model is already familiar with, before making the prediction. This strategy enables the model to handle any target domains with arbitrary style statistics, without additional model update at test-time. Additionally, we propose style balancing, which provides a great platform for maximizing the advantage of test-time style shifting by handling the DG-specific imbalance issues. The proposed ideas are easy to implement and successfully work in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.}
}
Endnote
%0 Conference Paper
%T Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
%A Jungwuk Park
%A Dong-Jun Han
%A Soyeong Kim
%A Jaekyun Moon
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-park23d
%I PMLR
%P 27114--27131
%U https://proceedings.mlr.press/v202/park23d.html
%V 202
%X In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference. This is a difficult problem, and despite active studies in recent years, it remains a great challenge. In this paper, we take a simple yet effective approach to tackle this issue. We propose test-time style shifting, which shifts the style of the test sample (that has a large style gap with the source domains) to the nearest source domain that the model is already familiar with, before making the prediction. This strategy enables the model to handle any target domains with arbitrary style statistics, without additional model update at test-time. Additionally, we propose style balancing, which provides a great platform for maximizing the advantage of test-time style shifting by handling the DG-specific imbalance issues. The proposed ideas are easy to implement and successfully work in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.
APA
Park, J., Han, D., Kim, S. & Moon, J. (2023). Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:27114-27131. Available from https://proceedings.mlr.press/v202/park23d.html.