Moderately Distributional Exploration for Domain Generalization

Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:6786-6817, 2023.

Abstract

Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimization is a promising way to tackle distribution discrepancy by exploring domains in an uncertainty set. However, the uncertainty set may be overwhelmingly large, leading to low-confidence predictions in DG. This is because a large uncertainty set can introduce domains whose semantic factors differ from those of the training domains. To address this issue, we propose $\textit{mo}$derately $\textit{d}$istributional $\textit{e}$xploration (MODE) for domain generalization. Specifically, MODE performs distribution exploration in an uncertainty $\textit{subset}$ that shares the same semantic factors as the training domains. We show that MODE endows models with provable generalization performance on unknown target domains. Experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
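
To make the framing concrete, here is a minimal sketch of the distributionally robust optimization (DRO) objective the abstract alludes to; the symbols ($\theta$, $f_\theta$, $\ell$, $\mathcal{U}$, $D$, $\rho$) are illustrative notation, not taken from the paper:

$$\min_{\theta}\; \sup_{Q \in \mathcal{U}} \; \mathbb{E}_{(x,y)\sim Q}\big[\ell(f_\theta(x), y)\big], \qquad \mathcal{U} = \{\, Q : D(Q, P_{\mathrm{tr}}) \le \rho \,\},$$

where $P_{\mathrm{tr}}$ is the training distribution, $f_\theta$ the model, $\ell$ a loss, $D$ a distance between distributions, and $\rho$ the exploration radius. Under this reading, the "moderate" exploration described in the abstract corresponds to replacing $\mathcal{U}$ with a subset $\mathcal{U}_{\mathrm{sem}} \subseteq \mathcal{U}$ that retains only distributions sharing the training domains' semantic factors, so the worst case is taken over plausible domain shifts rather than over semantically corrupted ones.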

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-dai23d,
  title     = {Moderately Distributional Exploration for Domain Generalization},
  author    = {Dai, Rui and Zhang, Yonggang and Fang, Zhen and Han, Bo and Tian, Xinmei},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {6786--6817},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/dai23d/dai23d.pdf},
  url       = {https://proceedings.mlr.press/v202/dai23d.html},
  abstract  = {Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimization is promising to tackle distribution discrepancy by exploring domains in an uncertainty set. However, the uncertainty set may be overwhelmingly large, leading to low-confidence prediction in DG. It is because a large uncertainty set could introduce domains containing semantically different factors from training domains. To address this issue, we propose to perform a $\textit{mo}$derately $\textit{d}$istributional $\textit{e}$xploration (MODE) for domain generalization. Specifically, MODE performs distribution exploration in an uncertainty $\textit{subset}$ that shares the same semantic factors with the training domains. We show that MODE can endow models with provable generalization performance on unknown target domains. The experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.}
}
Endnote
%0 Conference Paper
%T Moderately Distributional Exploration for Domain Generalization
%A Rui Dai
%A Yonggang Zhang
%A Zhen Fang
%A Bo Han
%A Xinmei Tian
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-dai23d
%I PMLR
%P 6786--6817
%U https://proceedings.mlr.press/v202/dai23d.html
%V 202
%X Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimization is promising to tackle distribution discrepancy by exploring domains in an uncertainty set. However, the uncertainty set may be overwhelmingly large, leading to low-confidence prediction in DG. It is because a large uncertainty set could introduce domains containing semantically different factors from training domains. To address this issue, we propose to perform a $\textit{mo}$derately $\textit{d}$istributional $\textit{e}$xploration (MODE) for domain generalization. Specifically, MODE performs distribution exploration in an uncertainty $\textit{subset}$ that shares the same semantic factors with the training domains. We show that MODE can endow models with provable generalization performance on unknown target domains. The experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
APA
Dai, R., Zhang, Y., Fang, Z., Han, B. & Tian, X. (2023). Moderately Distributional Exploration for Domain Generalization. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:6786-6817. Available from https://proceedings.mlr.press/v202/dai23d.html.
