On Challenges in Unsupervised Domain Generalization

Vaasudev Narayanan, Aniket Anand Deshmukh, Urun Dogan, Vineeth N. Balasubramanian
NeurIPS 2021 Workshop on Pre-registration in Machine Learning, PMLR 181:42-58, 2022.

Abstract

Domain Generalization (DG) aims to learn a model from a labeled set of source domains which can generalize to an unseen target domain. Although an important stepping stone towards building general purpose models, the reliance of DG on labeled source data is a problem if we are to deploy scalable ML algorithms in the wild. We thus propose to study a novel and more challenging setting which shares the same goals as that of DG, but without source labels. We name this setting as Unsupervised Domain Generalization (UDG), where the objective is to learn a model from an unlabeled set of source domains that can semantically cluster images in an unseen target domain. We investigate the challenges involved in solving UDG as well as potential methods to address the same. Our experiments indicate that learning a generalizable feature representation using self-supervision is a strong baseline for UDG, even outperforming sophisticated methods explicitly designed to address domain shift and clustering.

Cite this Paper


BibTeX
@InProceedings{pmlr-v181-narayanan22a,
  title     = {On Challenges in Unsupervised Domain Generalization},
  author    = {Narayanan, Vaasudev and Deshmukh, Aniket Anand and Dogan, Urun and Balasubramanian, Vineeth N.},
  booktitle = {NeurIPS 2021 Workshop on Pre-registration in Machine Learning},
  pages     = {42--58},
  year      = {2022},
  editor    = {Albanie, Samuel and Henriques, João F. and Bertinetto, Luca and Hernández-García, Alex and Doughty, Hazel and Varol, Gül},
  volume    = {181},
  series    = {Proceedings of Machine Learning Research},
  month     = {13 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v181/narayanan22a/narayanan22a.pdf},
  url       = {https://proceedings.mlr.press/v181/narayanan22a.html},
  abstract  = {Domain Generalization (DG) aims to learn a model from a labeled set of source domains which can generalize to an unseen target domain. Although an important stepping stone towards building general purpose models, the reliance of DG on labeled source data is a problem if we are to deploy scalable ML algorithms in the wild. We thus propose to study a novel and more challenging setting which shares the same goals as that of DG, but without source labels. We name this setting as Unsupervised Domain Generalization (UDG), where the objective is to learn a model from an unlabeled set of source domains that can semantically cluster images in an unseen target domain. We investigate the challenges involved in solving UDG as well as potential methods to address the same. Our experiments indicate that learning a generalizable feature representation using self-supervision is a strong baseline for UDG, even outperforming sophisticated methods explicitly designed to address domain shift and clustering.}
}
Endnote
%0 Conference Paper
%T On Challenges in Unsupervised Domain Generalization
%A Vaasudev Narayanan
%A Aniket Anand Deshmukh
%A Urun Dogan
%A Vineeth N. Balasubramanian
%B NeurIPS 2021 Workshop on Pre-registration in Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Samuel Albanie
%E João F. Henriques
%E Luca Bertinetto
%E Alex Hernández-García
%E Hazel Doughty
%E Gül Varol
%F pmlr-v181-narayanan22a
%I PMLR
%P 42--58
%U https://proceedings.mlr.press/v181/narayanan22a.html
%V 181
%X Domain Generalization (DG) aims to learn a model from a labeled set of source domains which can generalize to an unseen target domain. Although an important stepping stone towards building general purpose models, the reliance of DG on labeled source data is a problem if we are to deploy scalable ML algorithms in the wild. We thus propose to study a novel and more challenging setting which shares the same goals as that of DG, but without source labels. We name this setting as Unsupervised Domain Generalization (UDG), where the objective is to learn a model from an unlabeled set of source domains that can semantically cluster images in an unseen target domain. We investigate the challenges involved in solving UDG as well as potential methods to address the same. Our experiments indicate that learning a generalizable feature representation using self-supervision is a strong baseline for UDG, even outperforming sophisticated methods explicitly designed to address domain shift and clustering.
APA
Narayanan, V., Deshmukh, A.A., Dogan, U. & Balasubramanian, V.N. (2022). On Challenges in Unsupervised Domain Generalization. NeurIPS 2021 Workshop on Pre-registration in Machine Learning, in Proceedings of Machine Learning Research 181:42-58. Available from https://proceedings.mlr.press/v181/narayanan22a.html.