Asymmetric Tri-training for Unsupervised Domain Adaptation

Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2988-2997, 2017.

Abstract

It is important to apply models trained on a large number of labeled samples to different domains because collecting many labeled samples in various domains is expensive. To learn discriminative representations for the target domain, we assume that artificially labeling the target samples can result in a good representation. Tri-training leverages three classifiers equally to provide pseudo-labels to unlabeled samples; however, the method does not assume labeling samples generated from a different domain. In this paper, we propose the use of an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train the neural networks as if they are true labels. In our work, we use three networks asymmetrically, and by asymmetric, we mean that two networks are used to label unlabeled target samples, and one network is trained by the pseudo-labeled samples to obtain target-discriminative representations. Our proposed method was shown to achieve a state-of-the-art performance on the benchmark digit recognition datasets for domain adaptation.
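The core labeling step described above can be sketched as follows: two labeling networks predict on unlabeled target samples, and a sample receives a pseudo-label only when both networks agree on the class and the prediction is confident. This is a minimal illustration of that selection rule, not the paper's full training procedure; the function name, the threshold value, and the use of the maximum class probability as the confidence score are illustrative assumptions.

```python
import numpy as np

def select_pseudo_labels(probs1, probs2, threshold=0.9):
    """Sketch of the pseudo-label selection rule: keep a target sample
    only when both labeling networks predict the same class and the
    higher of the two confidences exceeds `threshold` (illustrative
    criterion). Returns the selected sample indices and their labels,
    to be used as training data for the third, target-specific network."""
    y1 = probs1.argmax(axis=1)          # class predicted by network 1
    y2 = probs2.argmax(axis=1)          # class predicted by network 2
    conf = np.maximum(probs1.max(axis=1), probs2.max(axis=1))
    mask = (y1 == y2) & (conf > threshold)
    return np.where(mask)[0], y1[mask]

# Toy example: 3 unlabeled target samples, 2 classes.
p1 = np.array([[0.95, 0.05], [0.60, 0.40], [0.20, 0.80]])
p2 = np.array([[0.90, 0.10], [0.30, 0.70], [0.15, 0.85]])
idx, labels = select_pseudo_labels(p1, p2, threshold=0.9)
# Only the first sample passes: both networks agree (class 0)
# and the confidence 0.95 exceeds the threshold.
```

The selected `(idx, labels)` pairs would then be treated as if they were true labels when training the third network, which is the asymmetric part of the scheme.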

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-saito17a,
  title     = {Asymmetric Tri-training for Unsupervised Domain Adaptation},
  author    = {Kuniaki Saito and Yoshitaka Ushiku and Tatsuya Harada},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2988--2997},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/saito17a/saito17a.pdf},
  url       = {https://proceedings.mlr.press/v70/saito17a.html},
  abstract  = {It is important to apply models trained on a large number of labeled samples to different domains because collecting many labeled samples in various domains is expensive. To learn discriminative representations for the target domain, we assume that artificially labeling the target samples can result in a good representation. Tri-training leverages three classifiers equally to provide pseudo-labels to unlabeled samples; however, the method does not assume labeling samples generated from a different domain. In this paper, we propose the use of an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train the neural networks as if they are true labels. In our work, we use three networks asymmetrically, and by asymmetric, we mean that two networks are used to label unlabeled target samples, and one network is trained by the pseudo-labeled samples to obtain target-discriminative representations. Our proposed method was shown to achieve a state-of-the-art performance on the benchmark digit recognition datasets for domain adaptation.}
}
Endnote
%0 Conference Paper
%T Asymmetric Tri-training for Unsupervised Domain Adaptation
%A Kuniaki Saito
%A Yoshitaka Ushiku
%A Tatsuya Harada
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-saito17a
%I PMLR
%P 2988--2997
%U https://proceedings.mlr.press/v70/saito17a.html
%V 70
%X It is important to apply models trained on a large number of labeled samples to different domains because collecting many labeled samples in various domains is expensive. To learn discriminative representations for the target domain, we assume that artificially labeling the target samples can result in a good representation. Tri-training leverages three classifiers equally to provide pseudo-labels to unlabeled samples; however, the method does not assume labeling samples generated from a different domain. In this paper, we propose the use of an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train the neural networks as if they are true labels. In our work, we use three networks asymmetrically, and by asymmetric, we mean that two networks are used to label unlabeled target samples, and one network is trained by the pseudo-labeled samples to obtain target-discriminative representations. Our proposed method was shown to achieve a state-of-the-art performance on the benchmark digit recognition datasets for domain adaptation.
APA
Saito, K., Ushiku, Y. & Harada, T. (2017). Asymmetric Tri-training for Unsupervised Domain Adaptation. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2988-2997. Available from https://proceedings.mlr.press/v70/saito17a.html.