SNAIL: Semi-Separated Uncertainty Adversarial Learning for Universal Domain Adaptation
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:436-451, 2023.
Abstract
Universal domain adaptation (UniDA) is a recent
sub-topic of unsupervised domain adaptation. It
addresses the setting in which the source or target
domain may contain open-class samples. The inherent
challenge is to detect these open-class samples at
test time. Pioneering studies can be viewed as
dependent-detector-based methods: they cleverly
design efficient uncertainty metrics (e.g.,
confidence, entropy, distance) based on the outputs
of the domain adaptation model (predictor) to detect
open-class samples. However, they share a pain
point: they must set extremely sensitive,
task-dependent thresholds on the uncertainty metrics
to filter open-class samples. To bypass this pain
point, we
propose a semi-separated-detector-based method,
Semi-Separated Uncertainty Adversarial Learning
(SNAIL). We build a semi-separated uncertainty
decision-maker to enable sensitive-threshold-free
detection. It receives multiple uncertainty metrics
as attributes and separately learns the threshold of
each metric within a multi-level decision rule. In
some challenging tasks, the uncertainty margins
between common and open classes are subtle, making
it difficult to learn optimal decision rules. We
present an uncertainty separation loss to
enlarge the uncertainty margin. Further, forcibly
aligning the distributions can incorrectly align
open classes to common classes. Thanks to the
open-class detection strategy, we design a
conditional-weighted adversarial loss that
adversarially and selectively matches the feature
distributions to overcome this misalignment problem.
Extensive experiments show that SNAIL remarkably
outperforms state-of-the-art domain adaptation
methods, with over 25% improvement in open-class
detection accuracy on some tasks.
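As a minimal sketch of the ideas in the abstract: the snippet below computes two uncertainty metrics that dependent-detector-based methods typically threshold (max-probability confidence and prediction entropy), and then uses the resulting open/common decision to down-weight likely-open samples, in the spirit of selective adversarial alignment. The threshold value and the weighting scheme are illustrative assumptions, not SNAIL's actual decision rule or loss.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def uncertainty_metrics(logits):
    """Two uncertainty metrics commonly used by dependent-detector-based
    UniDA methods: max-probability confidence and prediction entropy."""
    p = softmax(logits)
    confidence = p.max(axis=-1)                      # low  -> likely open class
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)  # high -> likely open class
    return confidence, entropy

# Example: one confident prediction and one near-uniform (uncertain) one.
logits = np.array([[8.0, 0.5, 0.2],
                   [1.0, 1.1, 0.9]])
conf, ent = uncertainty_metrics(logits)

# Dependent-detector-based methods filter open-class samples with a fixed
# threshold on such a metric; the value below is arbitrary, which is
# precisely the threshold-sensitivity problem the abstract describes.
tau = 0.8  # illustrative threshold, not from the paper
is_open = ent > tau

# Selective alignment in the spirit of a conditional-weighted adversarial
# loss (illustrative): zero out likely-open samples so distribution
# matching does not force them onto common classes.
weights = np.where(is_open, 0.0, conf)
```

The confident sample keeps a weight near its confidence, while the near-uniform sample is flagged as open-class and excluded from alignment, illustrating why the choice of `tau` is so consequential for these detectors.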