A One-step Approach to Covariate Shift Adaptation

Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:65-80, 2020.

Abstract

A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution. However, such an assumption is often violated in the real world due to non-stationarity of the environment or bias in sample selection. In this work, we consider a prevalent setting called covariate shift, where the input distribution differs between the training and test stages while the conditional distribution of the output given the input remains unchanged. Most of the existing methods for covariate shift adaptation are two-step approaches, which first calculate the importance weights and then conduct importance-weighted empirical risk minimization. In this paper, we propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization by minimizing an upper bound of the test risk. We theoretically analyze the proposed method and provide a generalization error bound. We also empirically demonstrate the effectiveness of the proposed method.
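The two-step baseline the abstract contrasts against can be sketched concretely: first estimate the importance weights w(x) = p_test(x)/p_train(x), then minimize the importance-weighted empirical risk. The following is a minimal illustrative sketch, not the paper's one-step method; the Gaussian shift, the use of true densities in place of an estimator such as KLIEP or uLSIF, and the linear model are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: training and test inputs come from different Gaussians,
# while the conditional y = sin(x) + noise is the same in both stages.
n_tr = 200
x_tr = rng.normal(0.0, 1.0, n_tr)
y_tr = np.sin(x_tr) + 0.1 * rng.normal(size=n_tr)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Step 1: importance weights. Here the true density ratio is used; in practice
# the ratio must be estimated from samples, which is the step the paper's
# one-step approach folds into the risk minimization itself.
w = gauss_pdf(x_tr, 0.5, 1.0) / gauss_pdf(x_tr, 0.0, 1.0)

# Step 2: importance-weighted ERM for a linear model, solved in closed form
# as weighted least squares.
X = np.stack([np.ones(n_tr), x_tr], axis=1)
W = np.diag(w)
theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_tr)
print(theta.shape)
```

The weighted normal equations here are the closed-form minimizer of the weighted squared loss; with a nonlinear model the same weighted objective would instead be minimized by gradient descent.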

Cite this Paper


BibTeX
@InProceedings{pmlr-v129-zhang20a,
  title     = {A One-step Approach to Covariate Shift Adaptation},
  author    = {Zhang, Tianyi and Yamane, Ikko and Lu, Nan and Sugiyama, Masashi},
  booktitle = {Proceedings of The 12th Asian Conference on Machine Learning},
  pages     = {65--80},
  year      = {2020},
  editor    = {Pan, Sinno Jialin and Sugiyama, Masashi},
  volume    = {129},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--20 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v129/zhang20a/zhang20a.pdf},
  url       = {https://proceedings.mlr.press/v129/zhang20a.html},
  abstract  = {A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution. However, such an assumption is often violated in the real world due to non-stationarity of the environment or bias in sample selection. In this work, we consider a prevalent setting called covariate shift, where the input distribution differs between the training and test stages while the conditional distribution of the output given the input remains unchanged. Most of the existing methods for covariate shift adaptation are two-step approaches, which first calculate the importance weights and then conduct importance-weighted empirical risk minimization. In this paper, we propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization by minimizing an upper bound of the test risk. We theoretically analyze the proposed method and provide a generalization error bound. We also empirically demonstrate the effectiveness of the proposed method.}
}
Endnote
%0 Conference Paper
%T A One-step Approach to Covariate Shift Adaptation
%A Tianyi Zhang
%A Ikko Yamane
%A Nan Lu
%A Masashi Sugiyama
%B Proceedings of The 12th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Sinno Jialin Pan
%E Masashi Sugiyama
%F pmlr-v129-zhang20a
%I PMLR
%P 65--80
%U https://proceedings.mlr.press/v129/zhang20a.html
%V 129
%X A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution. However, such an assumption is often violated in the real world due to non-stationarity of the environment or bias in sample selection. In this work, we consider a prevalent setting called covariate shift, where the input distribution differs between the training and test stages while the conditional distribution of the output given the input remains unchanged. Most of the existing methods for covariate shift adaptation are two-step approaches, which first calculate the importance weights and then conduct importance-weighted empirical risk minimization. In this paper, we propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization by minimizing an upper bound of the test risk. We theoretically analyze the proposed method and provide a generalization error bound. We also empirically demonstrate the effectiveness of the proposed method.
APA
Zhang, T., Yamane, I., Lu, N. &amp; Sugiyama, M. (2020). A One-step Approach to Covariate Shift Adaptation. Proceedings of The 12th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 129:65-80. Available from https://proceedings.mlr.press/v129/zhang20a.html.