Few-shot Adaptation to Distribution Shifts By Mixing Source and Target Embeddings

Yihao Xue, Ali Payani, Yu Yang, Baharan Mirzasoleiman
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:55569-55594, 2024.

Abstract

Pretrained machine learning models need to be adapted to distribution shifts when deployed in new target environments. When obtaining labeled data from the target distribution is expensive, few-shot adaptation with only a few examples from the target distribution becomes essential. In this work, we propose MixPro, a lightweight and highly data-efficient approach for few-shot adaptation. MixPro first generates a relatively large dataset by mixing (linearly combining) pre-trained embeddings of large source data with those of the few target examples. This process preserves important features of both source and target distributions, while mitigating the specific noise in the small target data. Then, it trains a linear classifier on the mixed embeddings to effectively adapt the model to the target distribution without overfitting the small target data. Theoretically, we demonstrate the advantages of MixPro over previous methods. Our experiments, conducted across various model architectures on 8 datasets featuring different types of distribution shifts, reveal that MixPro can outperform baselines by as much as 7%, with only 2-4 target examples.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-xue24a,
  title     = {Few-shot Adaptation to Distribution Shifts By Mixing Source and Target Embeddings},
  author    = {Xue, Yihao and Payani, Ali and Yang, Yu and Mirzasoleiman, Baharan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {55569--55594},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/xue24a/xue24a.pdf},
  url       = {https://proceedings.mlr.press/v235/xue24a.html},
  abstract  = {Pretrained machine learning models need to be adapted to distribution shifts when deployed in new target environments. When obtaining labeled data from the target distribution is expensive, few-shot adaptation with only a few examples from the target distribution becomes essential. In this work, we propose MixPro, a lightweight and highly data-efficient approach for few-shot adaptation. MixPro first generates a relatively large dataset by mixing (linearly combining) pre-trained embeddings of large source data with those of the few target examples. This process preserves important features of both source and target distributions, while mitigating the specific noise in the small target data. Then, it trains a linear classifier on the mixed embeddings to effectively adapt the model to the target distribution without overfitting the small target data. Theoretically, we demonstrate the advantages of MixPro over previous methods. Our experiments, conducted across various model architectures on 8 datasets featuring different types of distribution shifts, reveal that MixPro can outperform baselines by as much as 7%, with only 2-4 target examples.}
}
Endnote
%0 Conference Paper
%T Few-shot Adaptation to Distribution Shifts By Mixing Source and Target Embeddings
%A Yihao Xue
%A Ali Payani
%A Yu Yang
%A Baharan Mirzasoleiman
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-xue24a
%I PMLR
%P 55569--55594
%U https://proceedings.mlr.press/v235/xue24a.html
%V 235
%X Pretrained machine learning models need to be adapted to distribution shifts when deployed in new target environments. When obtaining labeled data from the target distribution is expensive, few-shot adaptation with only a few examples from the target distribution becomes essential. In this work, we propose MixPro, a lightweight and highly data-efficient approach for few-shot adaptation. MixPro first generates a relatively large dataset by mixing (linearly combining) pre-trained embeddings of large source data with those of the few target examples. This process preserves important features of both source and target distributions, while mitigating the specific noise in the small target data. Then, it trains a linear classifier on the mixed embeddings to effectively adapt the model to the target distribution without overfitting the small target data. Theoretically, we demonstrate the advantages of MixPro over previous methods. Our experiments, conducted across various model architectures on 8 datasets featuring different types of distribution shifts, reveal that MixPro can outperform baselines by as much as 7%, with only 2-4 target examples.
APA
Xue, Y., Payani, A., Yang, Y. & Mirzasoleiman, B. (2024). Few-shot Adaptation to Distribution Shifts By Mixing Source and Target Embeddings. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:55569-55594. Available from https://proceedings.mlr.press/v235/xue24a.html.
