Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution

Wang Lu, Jindong Wang, Yidong Wang, Xing Xie
Proceedings of The KDD'23 Workshop on Causal Discovery, Prediction and Decision, PMLR 218:75-97, 2023.

Abstract

Distribution shifts between training and test data typically undermine the performance of deep learning models. In recent years, much work has paid attention to domain generalization (DG), where distribution shifts exist and target data are unseen. Despite the progress in algorithm design, two foundational factors have long been ignored: 1) the optimization of regularization-based objectives (e.g., distribution alignment), and 2) model selection for DG, since no knowledge about the target domain can be utilized. In this paper, we propose Mixup-guided optimization and selection techniques for domain generalization. For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction, and optimize with Pareto optimization. For model selection, we generate a validation dataset that is closer to the target distribution and can thereby better represent the target data. We also present theoretical insights behind our proposals. Comprehensive experiments on one visual classification benchmark and three time-series benchmarks demonstrate that our model optimization and selection techniques can largely improve the performance of existing domain generalization algorithms and even achieve new state-of-the-art results.
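A minimal sketch of the two Mixup-guided ideas above (not the authors' released implementation), assuming PyTorch. The names make_mixup_ood and select_model are hypothetical, and the paper's Pareto solver for combining objectives is omitted for brevity.

import torch

def make_mixup_ood(xa, ya, xb, yb, alpha=0.2):
    """Mix batches from two source domains to synthesize a pseudo
    out-of-distribution set that stands in for the unseen target.
    xa, xb: inputs; ya, yb: one-hot labels of matching shape."""
    n = min(len(xa), len(xb))
    lam = torch.distributions.Beta(alpha, alpha).sample((n,))
    lam_x = lam.view(-1, *([1] * (xa.dim() - 1)))  # broadcast over feature dims
    x_mix = lam_x * xa[:n] + (1 - lam_x) * xb[:n]
    y_mix = lam.unsqueeze(1) * ya[:n] + (1 - lam.unsqueeze(1)) * yb[:n]
    return x_mix, y_mix

def select_model(candidates, x_val, y_val):
    """Model selection: score candidate checkpoints on the Mixup-generated
    validation set rather than an i.i.d. source split, since the mixed
    data lie closer to the unseen target distribution."""
    def accuracy(model):
        with torch.no_grad():
            pred = model(x_val).argmax(dim=1)
        return (pred == y_val.argmax(dim=1)).float().mean().item()
    return max(candidates, key=accuracy)

During training, the loss on (x_mix, y_mix) can likewise serve as the out-of-distribution signal that steers the preference direction among the task and regularization objectives.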

Cite this Paper


BibTeX
@InProceedings{pmlr-v218-lu23a,
  title = {Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution},
  author = {Lu, Wang and Wang, Jindong and Wang, Yidong and Xie, Xing},
  booktitle = {Proceedings of The KDD'23 Workshop on Causal Discovery, Prediction and Decision},
  pages = {75--97},
  year = {2023},
  editor = {Le, Thuc and Li, Jiuyong and Ness, Robert and Triantafillou, Sofia and Shimizu, Shohei and Cui, Peng and Kuang, Kun and Pei, Jian and Wang, Fei and Prosperi, Mattia},
  volume = {218},
  series = {Proceedings of Machine Learning Research},
  month = {07 Aug},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v218/lu23a/lu23a.pdf},
  url = {https://proceedings.mlr.press/v218/lu23a.html},
  abstract = {Distribution shifts between training and test data typically undermine the performance of deep learning models. In recent years, much work has paid attention to domain generalization (DG), where distribution shifts exist and target data are unseen. Despite the progress in algorithm design, two foundational factors have long been ignored: 1) the optimization of regularization-based objectives (e.g., distribution alignment), and 2) model selection for DG, since no knowledge about the target domain can be utilized. In this paper, we propose Mixup-guided optimization and selection techniques for domain generalization. For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction, and optimize with Pareto optimization. For model selection, we generate a validation dataset that is closer to the target distribution and can thereby better represent the target data. We also present theoretical insights behind our proposals. Comprehensive experiments on one visual classification benchmark and three time-series benchmarks demonstrate that our model optimization and selection techniques can largely improve the performance of existing domain generalization algorithms and even achieve new state-of-the-art results.}
}
Endnote
%0 Conference Paper
%T Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution
%A Wang Lu
%A Jindong Wang
%A Yidong Wang
%A Xing Xie
%B Proceedings of The KDD'23 Workshop on Causal Discovery, Prediction and Decision
%C Proceedings of Machine Learning Research
%D 2023
%E Thuc Le
%E Jiuyong Li
%E Robert Ness
%E Sofia Triantafillou
%E Shohei Shimizu
%E Peng Cui
%E Kun Kuang
%E Jian Pei
%E Fei Wang
%E Mattia Prosperi
%F pmlr-v218-lu23a
%I PMLR
%P 75--97
%U https://proceedings.mlr.press/v218/lu23a.html
%V 218
%X Distribution shifts between training and test data typically undermine the performance of deep learning models. In recent years, much work has paid attention to domain generalization (DG), where distribution shifts exist and target data are unseen. Despite the progress in algorithm design, two foundational factors have long been ignored: 1) the optimization of regularization-based objectives (e.g., distribution alignment), and 2) model selection for DG, since no knowledge about the target domain can be utilized. In this paper, we propose Mixup-guided optimization and selection techniques for domain generalization. For optimization, we utilize an adapted Mixup to generate an out-of-distribution dataset that can guide the preference direction, and optimize with Pareto optimization. For model selection, we generate a validation dataset that is closer to the target distribution and can thereby better represent the target data. We also present theoretical insights behind our proposals. Comprehensive experiments on one visual classification benchmark and three time-series benchmarks demonstrate that our model optimization and selection techniques can largely improve the performance of existing domain generalization algorithms and even achieve new state-of-the-art results.
APA
Lu, W., Wang, J., Wang, Y. & Xie, X. (2023). Towards Optimization and Model Selection for Domain Generalization: A Mixup-guided Solution. Proceedings of The KDD'23 Workshop on Causal Discovery, Prediction and Decision, in Proceedings of Machine Learning Research 218:75-97. Available from https://proceedings.mlr.press/v218/lu23a.html.
