Continual Domain Adversarial Adaptation via Double-Head Discriminators

Yan Shen, Zhanghexuan Ji, Chunwei Ma, Mingchen Gao
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2584-2592, 2024.

Abstract

Domain adversarial adaptation in a continual setting poses significant challenges due to limited access to previous source domain data. Despite extensive research in continual learning, adversarial adaptation cannot be accomplished effectively using only a small number of stored source domain samples, the standard setting in memory replay approaches. This limitation arises from the erroneous empirical estimation of $\mathcal{H}$-divergence with few source domain samples. To tackle this problem, we propose a double-head discriminator algorithm that introduces an additional source-only domain discriminator trained solely during the source learning phase. We prove that introducing a pre-trained source-only domain discriminator reduces the empirical estimation error of the $\mathcal{H}$-divergence-related adversarial loss from the source domain side. Further experiments on existing domain adaptation benchmarks show that our proposed algorithm achieves more than 2% improvement on all categories of target domain adaptation tasks while significantly mitigating forgetting of the source domain.
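The page does not spell out how the two discriminator heads are combined. Purely as an illustration of the idea described in the abstract, the PyTorch-style sketch below (the class, function, and variable names are hypothetical, not the authors' implementation) pairs a frozen discriminator pre-trained in the source phase with a live discriminator during adaptation, so the source-side term of the adversarial loss does not rest on the few replayed source samples alone.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Binary domain classifier operating on extracted features (illustrative)."""
    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z):
        return self.net(z)

def double_head_adversarial_loss(d_live, d_source_only, z_src_memory, z_tgt):
    """Hedged sketch of a domain-adversarial loss with two discriminator heads.

    d_live        -- discriminator trained during the adaptation phase
    d_source_only -- discriminator pre-trained in the source phase, kept frozen here
    z_src_memory  -- features of the few stored (replayed) source samples
    z_tgt         -- features of the current target-domain batch
    """
    bce = nn.BCEWithLogitsLoss()
    # Source side: blend the frozen source-only head's score with the live
    # head's score on the replay buffer; the 0.5/0.5 weighting is an assumption.
    src_logits = 0.5 * (d_live(z_src_memory) + d_source_only(z_src_memory).detach())
    tgt_logits = d_live(z_tgt)
    loss_src = bce(src_logits, torch.ones_like(src_logits))   # source labeled 1
    loss_tgt = bce(tgt_logits, torch.zeros_like(tgt_logits))  # target labeled 0
    return loss_src + loss_tgt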

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-shen24a,
  title     = {Continual Domain Adversarial Adaptation via Double-Head Discriminators},
  author    = {Shen, Yan and Ji, Zhanghexuan and Ma, Chunwei and Gao, Mingchen},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2584--2592},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/shen24a/shen24a.pdf},
  url       = {https://proceedings.mlr.press/v238/shen24a.html},
  abstract  = {Domain adversarial adaptation in a continual setting poses significant challenges due to limited access to previous source domain data. Despite extensive research in continual learning, adversarial adaptation cannot be accomplished effectively using only a small number of stored source domain samples, the standard setting in memory replay approaches. This limitation arises from the erroneous empirical estimation of $\mathcal{H}$-divergence with few source domain samples. To tackle this problem, we propose a double-head discriminator algorithm that introduces an additional source-only domain discriminator trained solely during the source learning phase. We prove that introducing a pre-trained source-only domain discriminator reduces the empirical estimation error of the $\mathcal{H}$-divergence-related adversarial loss from the source domain side. Further experiments on existing domain adaptation benchmarks show that our proposed algorithm achieves more than 2\% improvement on all categories of target domain adaptation tasks while significantly mitigating forgetting of the source domain.}
}
Endnote
%0 Conference Paper
%T Continual Domain Adversarial Adaptation via Double-Head Discriminators
%A Yan Shen
%A Zhanghexuan Ji
%A Chunwei Ma
%A Mingchen Gao
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-shen24a
%I PMLR
%P 2584--2592
%U https://proceedings.mlr.press/v238/shen24a.html
%V 238
%X Domain adversarial adaptation in a continual setting poses significant challenges due to limited access to previous source domain data. Despite extensive research in continual learning, adversarial adaptation cannot be accomplished effectively using only a small number of stored source domain samples, the standard setting in memory replay approaches. This limitation arises from the erroneous empirical estimation of $\mathcal{H}$-divergence with few source domain samples. To tackle this problem, we propose a double-head discriminator algorithm that introduces an additional source-only domain discriminator trained solely during the source learning phase. We prove that introducing a pre-trained source-only domain discriminator reduces the empirical estimation error of the $\mathcal{H}$-divergence-related adversarial loss from the source domain side. Further experiments on existing domain adaptation benchmarks show that our proposed algorithm achieves more than 2% improvement on all categories of target domain adaptation tasks while significantly mitigating forgetting of the source domain.
APA
Shen, Y., Ji, Z., Ma, C. & Gao, M. (2024). Continual Domain Adversarial Adaptation via Double-Head Discriminators. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2584-2592. Available from https://proceedings.mlr.press/v238/shen24a.html.
