SNAIL: Semi-Separated Uncertainty Adversarial Learning for Universal Domain Adaptation

Zhongyi Han, Wan Su, Rundong He, Yilong Yin
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:436-451, 2023.

Abstract

Universal domain adaptation (UniDA) is a recent sub-topic of unsupervised domain adaptation. It addresses the setting where the source or target domain may contain open-class samples, and the inherent challenge is to detect those open-class samples at test time. Pioneering studies can be viewed as dependent-detector-based methods: they cleverly design efficient uncertainty metrics (e.g., confidence, entropy, distance) from the outputs of the domain adaptation model (the predictor) to detect open-class samples. However, they share a pain point: filtering open-class samples requires setting extremely sensitive, task-dependent thresholds on the uncertainty metrics. To bypass this pain point, we propose a semi-separated-detector-based method, Semi-Separated Uncertainty Adversarial Learning (SNAIL). We build a semi-separated uncertainty decision-maker that enables sensitive-threshold-free detection: it receives multiple uncertainty metrics as attributes and separately learns the threshold of each metric within a multi-level decision rule. For some challenging tasks, the uncertainty margins between common and open classes are subtle, making optimal decision rules difficult to learn; we therefore present an uncertainty separation loss that enlarges the uncertainty margin. Further, forcibly aligning the distributions could incorrectly align open classes to common classes. Thanks to the open-class detection strategy, we design a conditional-weighted adversarial loss that adversarially and selectively matches the feature distributions to overcome this misalignment problem. Extensive experiments show that SNAIL remarkably outperforms state-of-the-art domain adaptation methods, with over 25% improvement in open-class detection accuracy on some tasks.
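The three uncertainty metrics named in the abstract (confidence, entropy, distance) can be sketched as below. This is an illustrative sketch only: the function names and the prototype-distance formulation are assumptions for exposition, not the paper's exact definitions.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def uncertainty_metrics(logits, features, prototypes):
    """Per-sample uncertainty attributes (illustrative forms).

    confidence: max softmax probability (low  -> more uncertain)
    entropy:    Shannon entropy of softmax   (high -> more uncertain)
    distance:   distance to nearest class prototype (high -> more uncertain)
    """
    probs = softmax(logits)
    confidence = probs.max(axis=1)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    # Euclidean distance from each feature to each class prototype,
    # then keep the distance to the nearest prototype
    diffs = features[:, None, :] - prototypes[None, :, :]
    distance = np.linalg.norm(diffs, axis=2).min(axis=1)
    return confidence, entropy, distance

# toy example: 4 samples, 3 known classes, 5-dim features
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
features = rng.normal(size=(4, 5))
prototypes = rng.normal(size=(3, 5))
conf, ent, dist = uncertainty_metrics(logits, features, prototypes)
```

In the dependent-detector setting criticized by the paper, a single hand-tuned threshold would be applied to one such metric; SNAIL instead feeds all three as attributes to a decision-maker that learns the thresholds in a multi-level decision rule.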

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-han23a,
  title     = {SNAIL: Semi-Separated Uncertainty Adversarial Learning for Universal Domain Adaptation},
  author    = {Han, Zhongyi and Su, Wan and He, Rundong and Yin, Yilong},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {436--451},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/han23a/han23a.pdf},
  url       = {https://proceedings.mlr.press/v189/han23a.html},
  abstract  = {Universal domain adaptation (UniDA) is a new sub-topic of unsupervised domain adaptation. It handles the problem that the source or target domain possibly has open-class samples. The inborn challenge is to detect the open-class samples in the test phase. Pioneering studies could be viewed as dependent-detector-based methods. They cleverly design efficient uncertainty metrics (\eg, confidence, entropy, distance) based on the outputs of domain adaptation models (predictor) to detect open-class samples. However, they have a pain point in setting extremely-sensitive and task-dependent thresholds on the uncertainty metrics to filter open-class samples. To bypass this pain point, we propose a semi-separated-detector-based method, Semi-Separated Uncertainty Adversarial Learning (SNAIL). We build a semi-separated uncertainty decision-maker to enable sensitive-threshold-free detection. It receives multiple uncertainty metrics as attributes and separately learns the thresholds of uncertainty metrics in a multi-level decision rule. For some challenging tasks, the uncertainty margins between common and open classes are subtle, leading to difficulty learning optimal decision rules. We present the uncertainty separation loss to enlarge the uncertainty margin. Further, forcibly aligning the distributions could incorrectly align the open classes to common classes. Thanks to the open-class detection strategy, we design the conditional-weighted adversarial loss that adversarially and selectively matches the feature distributions to defeat the distribution misalignment problem. Extensive experiments show that SNAIL remarkably outperforms the state-of-the-art domain adaptation methods, with over 25% improvements in open-class detection accuracy for some tasks.}
}
Endnote
%0 Conference Paper
%T SNAIL: Semi-Separated Uncertainty Adversarial Learning for Universal Domain Adaptation
%A Zhongyi Han
%A Wan Su
%A Rundong He
%A Yilong Yin
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-han23a
%I PMLR
%P 436--451
%U https://proceedings.mlr.press/v189/han23a.html
%V 189
%X Universal domain adaptation (UniDA) is a new sub-topic of unsupervised domain adaptation. It handles the problem that the source or target domain possibly has open-class samples. The inborn challenge is to detect the open-class samples in the test phase. Pioneering studies could be viewed as dependent-detector-based methods. They cleverly design efficient uncertainty metrics (\eg, confidence, entropy, distance) based on the outputs of domain adaptation models (predictor) to detect open-class samples. However, they have a pain point in setting extremely-sensitive and task-dependent thresholds on the uncertainty metrics to filter open-class samples. To bypass this pain point, we propose a semi-separated-detector-based method, Semi-Separated Uncertainty Adversarial Learning (SNAIL). We build a semi-separated uncertainty decision-maker to enable sensitive-threshold-free detection. It receives multiple uncertainty metrics as attributes and separately learns the thresholds of uncertainty metrics in a multi-level decision rule. For some challenging tasks, the uncertainty margins between common and open classes are subtle, leading to difficulty learning optimal decision rules. We present the uncertainty separation loss to enlarge the uncertainty margin. Further, forcibly aligning the distributions could incorrectly align the open classes to common classes. Thanks to the open-class detection strategy, we design the conditional-weighted adversarial loss that adversarially and selectively matches the feature distributions to defeat the distribution misalignment problem. Extensive experiments show that SNAIL remarkably outperforms the state-of-the-art domain adaptation methods, with over 25% improvements in open-class detection accuracy for some tasks.
APA
Han, Z., Su, W., He, R., & Yin, Y. (2023). SNAIL: Semi-Separated Uncertainty Adversarial Learning for Universal Domain Adaptation. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:436-451. Available from https://proceedings.mlr.press/v189/han23a.html.