Domain Generalization with Interpolation Robustness

Ragja Palakkadavath, Thanh Nguyen-Tang, Hung Le, Svetha Venkatesh, Sunil Gupta
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1039-1054, 2024.

Abstract

Domain generalization (DG) uses multiple source (training) domains to learn a model that generalizes well to unseen domains. Existing approaches to DG need more scrutiny over (i) the ability to imagine data beyond the source domains and (ii) the ability to cope with the scarcity of training data. To address these shortcomings, we propose a novel framework - interpolation robustness, where we view each training domain as a point on a domain manifold and learn class-specific representations that are domain invariant across all interpolations between domains. We use this representation to propose a generic domain generalization approach that can be seamlessly combined with many state-of-the-art methods in DG. Through extensive experiments, we show that our approach can enhance the performance of several methods in the conventional and the limited training data setting.
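
A minimal sketch of the core idea described in the abstract, not the authors' implementation: given same-class batches from two source domains, a mixup-style interpolation of their representations is treated as a point "between" the domains, and the model is penalized if its class-specific representation or prediction drifts along that interpolation. The names encoder, classifier, x_a, x_b, y, the Beta(alpha, alpha) mixing coefficient, and the specific invariance penalty are illustrative assumptions written in PyTorch.

    # Hypothetical sketch of interpolation robustness (illustrative only).
    # encoder / classifier are assumed to be nn.Modules; x_a and x_b are
    # same-class batches drawn from two different source domains.
    import torch
    import torch.nn.functional as F

    def interpolation_robustness_loss(encoder, classifier, x_a, x_b, y, alpha=0.3):
        # Sample a mixing coefficient and interpolate the two domain representations.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        z_a, z_b = encoder(x_a), encoder(x_b)
        z_mix = lam * z_a + (1 - lam) * z_b  # a point on an interpolated "domain"

        # The interpolated representation should still predict the shared label ...
        cls_loss = F.cross_entropy(classifier(z_mix), y)
        # ... and should stay close to the endpoint representations (invariance term).
        inv_loss = F.mse_loss(z_mix, z_a.detach()) + F.mse_loss(z_mix, z_b.detach())
        return cls_loss + inv_loss

Because such a loss only constrains the representation, it can be added on top of an existing DG objective (e.g. ERM or an invariance-based method), which is consistent with the paper's claim that interpolation robustness can be combined with many state-of-the-art DG methods.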

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-palakkadavath24a,
  title     = {Domain Generalization with Interpolation Robustness},
  author    = {Palakkadavath, Ragja and Nguyen-Tang, Thanh and Le, Hung and Venkatesh, Svetha and Gupta, Sunil},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {1039--1054},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/palakkadavath24a/palakkadavath24a.pdf},
  url       = {https://proceedings.mlr.press/v222/palakkadavath24a.html},
  abstract  = {Domain generalization (DG) uses multiple source (training) domains to learn a model that generalizes well to unseen domains. Existing approaches to DG need more scrutiny over (i) the ability to imagine data beyond the source domains and (ii) the ability to cope with the scarcity of training data. To address these shortcomings, we propose a novel framework - \emph{interpolation robustness}, where we view each training domain as a point on a domain manifold and learn class-specific representations that are domain invariant across all interpolations between domains. We use this representation to propose a generic domain generalization approach that can be seamlessly combined with many state-of-the-art methods in DG. Through extensive experiments, we show that our approach can enhance the performance of several methods in the conventional and the limited training data setting.}
}
Endnote
%0 Conference Paper
%T Domain Generalization with Interpolation Robustness
%A Ragja Palakkadavath
%A Thanh Nguyen-Tang
%A Hung Le
%A Svetha Venkatesh
%A Sunil Gupta
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-palakkadavath24a
%I PMLR
%P 1039--1054
%U https://proceedings.mlr.press/v222/palakkadavath24a.html
%V 222
%X Domain generalization (DG) uses multiple source (training) domains to learn a model that generalizes well to unseen domains. Existing approaches to DG need more scrutiny over (i) the ability to imagine data beyond the source domains and (ii) the ability to cope with the scarcity of training data. To address these shortcomings, we propose a novel framework - interpolation robustness, where we view each training domain as a point on a domain manifold and learn class-specific representations that are domain invariant across all interpolations between domains. We use this representation to propose a generic domain generalization approach that can be seamlessly combined with many state-of-the-art methods in DG. Through extensive experiments, we show that our approach can enhance the performance of several methods in the conventional and the limited training data setting.
APA
Palakkadavath, R., Nguyen-Tang, T., Le, H., Venkatesh, S. & Gupta, S. (2024). Domain Generalization with Interpolation Robustness. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:1039-1054. Available from https://proceedings.mlr.press/v222/palakkadavath24a.html.