Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation

Hung-Chieh Fang, Po-Yi Lu, Hsuan-Tien Lin
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:16040-16057, 2025.

Abstract

Universal Domain Adaptation (UniDA) addresses unsupervised domain adaptation where target classes may differ arbitrarily from source ones, except for a shared subset. A widely used approach, partial domain matching (PDM), aligns only shared classes but struggles in extreme cases where many source classes are absent in the target domain, underperforming the most naive baseline that trains only on source data. In this work, we identify that the failure of PDM for extreme UniDA stems from dimensional collapse (DC) in target representations. To address target DC, we propose to jointly leverage the alignment and uniformity techniques in self-supervised learning (SSL) on the unlabeled target data to preserve the intrinsic structure of the learned representations. Our experimental results confirm that SSL consistently advances PDM and delivers new state-of-the-art results across a broader benchmark of UniDA scenarios with different proportions of shared classes, representing a crucial step toward truly comprehensive UniDA. Project page: https://dc-unida.github.io/
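The alignment and uniformity objectives referenced in the abstract are the standard SSL losses commonly attributed to Wang & Isola (2020). Below is a minimal PyTorch sketch of those two losses, plus a simple singular-value diagnostic for dimensional collapse; it illustrates the general technique under that assumption rather than reproducing the authors' implementation, and the encoder, aug, and x_t names in the usage comment are hypothetical placeholders.

import torch

def align_loss(z1: torch.Tensor, z2: torch.Tensor, alpha: int = 2) -> torch.Tensor:
    # Alignment: pull together L2-normalized features of two augmented
    # views of the same target image.
    return (z1 - z2).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    # Uniformity: spread L2-normalized features over the unit hypersphere,
    # discouraging the representation from concentrating in a few directions.
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()

def singular_value_spectrum(z: torch.Tensor) -> torch.Tensor:
    # Diagnostic for dimensional collapse: singular values of the centered
    # feature matrix; a spectrum that decays to near zero signals collapse.
    return torch.linalg.svdvals(z - z.mean(dim=0))

# Usage on an unlabeled target batch x_t (encoder and aug are placeholders):
#   z1 = torch.nn.functional.normalize(encoder(aug(x_t)), dim=1)
#   z2 = torch.nn.functional.normalize(encoder(aug(x_t)), dim=1)
#   ssl_loss = align_loss(z1, z2) + uniform_loss(z1)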

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-fang25i,
  title     = {Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation},
  author    = {Fang, Hung-Chieh and Lu, Po-Yi and Lin, Hsuan-Tien},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {16040--16057},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/fang25i/fang25i.pdf},
  url       = {https://proceedings.mlr.press/v267/fang25i.html},
  abstract  = {Universal Domain Adaptation (UniDA) addresses unsupervised domain adaptation where target classes may differ arbitrarily from source ones, except for a shared subset. A widely used approach, partial domain matching (PDM), aligns only shared classes but struggles in extreme cases where many source classes are absent in the target domain, underperforming the most naive baseline that trains on only source data. In this work, we identify that the failure of PDM for extreme UniDA stems from dimensional collapse (DC) in target representations. To address target DC, we propose to jointly leverage the alignment and uniformity techniques in self-supervised learning on the unlabeled target data to preserve the intrinsic structure of the learned representations. Our experimental results confirm that SSL consistently advances PDM and delivers new state-of-the-art results across a broader benchmark of UniDA scenarios with different portions of shared classes, representing a crucial step toward truly comprehensive UniDA. Project page: https://dc-unida.github.io/}
}
Endnote
%0 Conference Paper
%T Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation
%A Hung-Chieh Fang
%A Po-Yi Lu
%A Hsuan-Tien Lin
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-fang25i
%I PMLR
%P 16040--16057
%U https://proceedings.mlr.press/v267/fang25i.html
%V 267
%X Universal Domain Adaptation (UniDA) addresses unsupervised domain adaptation where target classes may differ arbitrarily from source ones, except for a shared subset. A widely used approach, partial domain matching (PDM), aligns only shared classes but struggles in extreme cases where many source classes are absent in the target domain, underperforming the most naive baseline that trains on only source data. In this work, we identify that the failure of PDM for extreme UniDA stems from dimensional collapse (DC) in target representations. To address target DC, we propose to jointly leverage the alignment and uniformity techniques in self-supervised learning on the unlabeled target data to preserve the intrinsic structure of the learned representations. Our experimental results confirm that SSL consistently advances PDM and delivers new state-of-the-art results across a broader benchmark of UniDA scenarios with different portions of shared classes, representing a crucial step toward truly comprehensive UniDA. Project page: https://dc-unida.github.io/
APA
Fang, H., Lu, P., & Lin, H. (2025). Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:16040-16057. Available from https://proceedings.mlr.press/v267/fang25i.html.