Learning Semantic Representations for Unsupervised Domain Adaptation

Shaoan Xie, Zibin Zheng, Liang Chen, Chuan Chen
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5423-5432, 2018.

Abstract

It is important to transfer knowledge from a label-rich source domain to an unlabeled target domain due to the expensive cost of manual labeling. Prior domain adaptation methods address this problem by aligning the global distribution statistics between the source and target domains, but they ignore the semantic information contained in samples; e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain. In this paper, we present the moving semantic transfer network, which learns semantic representations for unlabeled target samples by aligning labeled source centroids and pseudo-labeled target centroids. Features in the same class but from different domains are expected to be mapped nearby, resulting in improved target classification accuracy. Moving-average centroid alignment is carefully designed to compensate for the insufficient categorical information within each mini-batch. Experiments show that our model yields state-of-the-art results on standard datasets.
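The centroid-alignment idea in the abstract can be sketched in a few lines: maintain per-class feature centroids for the source (true labels) and target (pseudo-labels), update them with an exponential moving average so that classes missing from a mini-batch keep their previous estimate, and penalize the distance between matching centroids. The following NumPy sketch is illustrative only; the function names, decay value, and loss form are assumptions, not the paper's implementation.

```python
import numpy as np

def update_centroids(centroids, feats, labels, num_classes, decay=0.7):
    # Exponential-moving-average update of per-class centroids.
    # Classes absent from the mini-batch keep their old centroid,
    # compensating for limited categorical information per batch.
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            batch_mean = feats[mask].mean(axis=0)
            centroids[c] = decay * centroids[c] + (1 - decay) * batch_mean
    return centroids

def semantic_loss(src_centroids, tgt_centroids):
    # Squared Euclidean distance between matching class centroids.
    return np.sum((src_centroids - tgt_centroids) ** 2)

# Toy usage: 3 classes, 4-dimensional features.
rng = np.random.default_rng(0)
num_classes, dim = 3, 4
src_c = np.zeros((num_classes, dim))
tgt_c = np.zeros((num_classes, dim))

src_feats = rng.normal(size=(8, dim))
src_labels = rng.integers(0, num_classes, size=8)       # true labels
tgt_feats = rng.normal(size=(8, dim))
tgt_pseudo = rng.integers(0, num_classes, size=8)       # pseudo-labels

src_c = update_centroids(src_c, src_feats, src_labels, num_classes)
tgt_c = update_centroids(tgt_c, tgt_feats, tgt_pseudo, num_classes)
loss = semantic_loss(src_c, tgt_c)
```

In a full training loop, `loss` would be added to the classification and domain-alignment objectives and minimized jointly, pulling same-class features from both domains toward each other.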

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-xie18c,
  title = {Learning Semantic Representations for Unsupervised Domain Adaptation},
  author = {Xie, Shaoan and Zheng, Zibin and Chen, Liang and Chen, Chuan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {5423--5432},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/xie18c/xie18c.pdf},
  url = {http://proceedings.mlr.press/v80/xie18c.html},
  abstract = {It is important to transfer knowledge from a label-rich source domain to an unlabeled target domain due to the expensive cost of manual labeling. Prior domain adaptation methods address this problem by aligning the global distribution statistics between the source and target domains, but they ignore the semantic information contained in samples; e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain. In this paper, we present the moving semantic transfer network, which learns semantic representations for unlabeled target samples by aligning labeled source centroids and pseudo-labeled target centroids. Features in the same class but from different domains are expected to be mapped nearby, resulting in improved target classification accuracy. Moving-average centroid alignment is carefully designed to compensate for the insufficient categorical information within each mini-batch. Experiments show that our model yields state-of-the-art results on standard datasets.}
}
Endnote
%0 Conference Paper %T Learning Semantic Representations for Unsupervised Domain Adaptation %A Shaoan Xie %A Zibin Zheng %A Liang Chen %A Chuan Chen %B Proceedings of the 35th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2018 %E Jennifer Dy %E Andreas Krause %F pmlr-v80-xie18c %I PMLR %P 5423--5432 %U http://proceedings.mlr.press/v80/xie18c.html %V 80 %X It is important to transfer knowledge from a label-rich source domain to an unlabeled target domain due to the expensive cost of manual labeling. Prior domain adaptation methods address this problem by aligning the global distribution statistics between the source and target domains, but they ignore the semantic information contained in samples; e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain. In this paper, we present the moving semantic transfer network, which learns semantic representations for unlabeled target samples by aligning labeled source centroids and pseudo-labeled target centroids. Features in the same class but from different domains are expected to be mapped nearby, resulting in improved target classification accuracy. Moving-average centroid alignment is carefully designed to compensate for the insufficient categorical information within each mini-batch. Experiments show that our model yields state-of-the-art results on standard datasets.
APA
Xie, S., Zheng, Z., Chen, L. & Chen, C. (2018). Learning Semantic Representations for Unsupervised Domain Adaptation. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5423-5432. Available from http://proceedings.mlr.press/v80/xie18c.html.

Related Material