Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts

Bahar Taskesen, Man-Chung Yue, Jose Blanchet, Daniel Kuhn, Viet Anh Nguyen
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10162-10172, 2021.

Abstract

Least squares estimators, when trained on few target domain samples, may predict poorly. Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution that is close to the target distribution. Given available data, we investigate novel strategies to synthesize a family of least squares estimator experts that are robust with regard to moment conditions. When these moment conditions are specified using Kullback-Leibler or Wasserstein-type divergences, we can find the robust estimators efficiently using convex optimization. We use the Bernstein online aggregation algorithm on the proposed family of robust experts to generate predictions for the sequential stream of target test samples. Numerical experiments on real data show that the robust strategies systematically outperform non-robust interpolations of the empirical least squares estimators.
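The abstract describes aggregating a family of pre-computed experts with the Bernstein online aggregation (BOA) algorithm over a sequential stream of test samples. As a minimal illustration of that aggregation step (not the paper's code: the experts, data, and learning rate below are toy placeholders, and the weight update is a simplified BOA-style second-order rule), one could sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: experts are fixed linear predictors; the paper instead builds
# distributionally robust least squares experts via convex optimization.
d, n_experts, T = 3, 5, 200
true_beta = rng.normal(size=d)
experts = [true_beta + 0.5 * rng.normal(size=d) for _ in range(n_experts)]

eta = 0.5                    # learning rate (illustrative, not tuned)
log_w = np.zeros(n_experts)  # log-weights, for numerical stability

agg_loss = 0.0
expert_loss = np.zeros(n_experts)
for t in range(T):
    x = rng.normal(size=d)
    y = true_beta @ x + 0.1 * rng.normal()
    preds = np.array([beta @ x for beta in experts])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    y_hat = w @ preds                      # aggregated sequential prediction
    losses = (preds - y) ** 2
    agg_loss += (y_hat - y) ** 2
    expert_loss += losses
    # BOA-style second-order exponential-weight update (simplified sketch)
    log_w -= eta * losses * (1.0 + eta * losses)

print(agg_loss / T, expert_loss.min() / T)
```

The weights concentrate on the experts with small cumulative loss, so the aggregated predictor tracks the best expert in hindsight; in the paper the expert pool itself interpolates between source- and target-trained robust estimators.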

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-taskesen21a,
  title     = {Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts},
  author    = {Taskesen, Bahar and Yue, Man-Chung and Blanchet, Jose and Kuhn, Daniel and Nguyen, Viet Anh},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10162--10172},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/taskesen21a/taskesen21a.pdf},
  url       = {https://proceedings.mlr.press/v139/taskesen21a.html},
  abstract  = {Least squares estimators, when trained on few target domain samples, may predict poorly. Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution that is close to the target distribution. Given available data, we investigate novel strategies to synthesize a family of least squares estimator experts that are robust with regard to moment conditions. When these moment conditions are specified using Kullback-Leibler or Wasserstein-type divergences, we can find the robust estimators efficiently using convex optimization. We use the Bernstein online aggregation algorithm on the proposed family of robust experts to generate predictions for the sequential stream of target test samples. Numerical experiments on real data show that the robust strategies systematically outperform non-robust interpolations of the empirical least squares estimators.}
}
Endnote
%0 Conference Paper
%T Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts
%A Bahar Taskesen
%A Man-Chung Yue
%A Jose Blanchet
%A Daniel Kuhn
%A Viet Anh Nguyen
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-taskesen21a
%I PMLR
%P 10162--10172
%U https://proceedings.mlr.press/v139/taskesen21a.html
%V 139
%X Least squares estimators, when trained on few target domain samples, may predict poorly. Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution that is close to the target distribution. Given available data, we investigate novel strategies to synthesize a family of least squares estimator experts that are robust with regard to moment conditions. When these moment conditions are specified using Kullback-Leibler or Wasserstein-type divergences, we can find the robust estimators efficiently using convex optimization. We use the Bernstein online aggregation algorithm on the proposed family of robust experts to generate predictions for the sequential stream of target test samples. Numerical experiments on real data show that the robust strategies systematically outperform non-robust interpolations of the empirical least squares estimators.
APA
Taskesen, B., Yue, M.-C., Blanchet, J., Kuhn, D., & Nguyen, V. A. (2021). Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10162-10172. Available from https://proceedings.mlr.press/v139/taskesen21a.html.