Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6872-6881, 2019.

Abstract

Domain adaptation addresses the common situation in which the target distribution generating our test data differs from the source distribution generating our training data. While domain adaptation is impossible absent assumptions, strict conditions, e.g., covariate or label shift, enable principled algorithms. Recently proposed domain-adversarial approaches align source and target encodings, an approach often motivated as minimizing two (of three) terms in a theoretical bound on target error. Unfortunately, this minimization can cause arbitrary increases in the third term, a problem guaranteed to arise under shifting label distributions. We propose asymmetrically-relaxed distribution alignment, a new approach that overcomes some limitations of standard domain-adversarial algorithms. Moreover, we characterize precise assumptions under which our algorithm is theoretically principled and demonstrate empirical benefits on both synthetic and real datasets.
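For intuition, the relaxation replaces exact matching of the source and target encoding distributions with a one-sided constraint, roughly of the form p_T(z) <= (1 + beta) * p_S(z): the target encoding density need only be covered by an inflated source density rather than match it exactly. The toy NumPy sketch below is an illustrative construction under that assumption, not the paper's exact objective; the distributions, penalty functions, and beta value are invented for the example. It contrasts an exact alignment penalty, which is violated whenever label shift changes the target encoding distribution, with a beta-relaxed penalty that remains zero inside the relaxed envelope.

    # Toy illustration (not the paper's exact objective): compare an exact
    # marginal-alignment penalty with an asymmetrically relaxed one on two
    # discrete "encoding" distributions. Under label shift, the target
    # encoding distribution p_t can differ from the source p_s even for a
    # good encoder; the relaxed penalty tolerates p_t <= (1 + beta) * p_s.
    import numpy as np

    def exact_alignment_penalty(p_t, p_s):
        """Total-variation-style penalty: zero only if p_t equals p_s exactly."""
        return 0.5 * np.abs(p_t - p_s).sum()

    def relaxed_alignment_penalty(p_t, p_s, beta):
        """Asymmetric relaxation: only penalize mass where the target
        density exceeds (1 + beta) times the source density."""
        return np.maximum(0.0, p_t - (1.0 + beta) * p_s).sum()

    # Two encoding "cells" (e.g., latent regions dominated by each class).
    p_s = np.array([0.5, 0.5])   # balanced source distribution
    p_t = np.array([0.7, 0.3])   # label-shifted target distribution

    print(exact_alignment_penalty(p_t, p_s))          # 0.2 -> exact alignment is violated
    print(relaxed_alignment_penalty(p_t, p_s, 0.5))   # 0.0 -> within the (1 + beta) envelope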

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-wu19f,
  title     = {Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment},
  author    = {Wu, Yifan and Winston, Ezra and Kaushik, Divyansh and Lipton, Zachary},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6872--6881},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/wu19f/wu19f.pdf},
  url       = {https://proceedings.mlr.press/v97/wu19f.html}
}
Endnote
%0 Conference Paper
%T Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment
%A Yifan Wu
%A Ezra Winston
%A Divyansh Kaushik
%A Zachary Lipton
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-wu19f
%I PMLR
%P 6872--6881
%U https://proceedings.mlr.press/v97/wu19f.html
%V 97
APA
Wu, Y., Winston, E., Kaushik, D., & Lipton, Z. (2019). Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6872-6881. Available from https://proceedings.mlr.press/v97/wu19f.html.

Related Material