Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift

Mitsuhiro Fujikawa, Youhei Akimoto, Jun Sakuma, Kazuto Fukuchi
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:226-234, 2025.

Abstract

Transfer learning enhances prediction accuracy on a target distribution by leveraging data from a source distribution, demonstrating significant benefits in various applications. This paper introduces a novel dissimilarity measure that utilizes vicinity information, i.e., the local structure of data points, to analyze the excess error in classification under covariate shift, a transfer learning setting where marginal feature distributions differ but conditional label distributions remain the same. We characterize the excess error using the proposed measure and demonstrate faster or competitive convergence rates compared to previous techniques. Notably, our approach is effective when the support non-containment assumption, which often appears in real-world applications, holds. Our theoretical analysis bridges the gap between current theoretical findings and empirical observations in transfer learning, particularly in scenarios with significant differences between source and target distributions.
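The covariate-shift setting described in the abstract can be stated formally. This is the standard textbook definition (with $S$ denoting the source and $T$ the target distribution), not notation taken from the paper itself:

$$
p_S(x) \neq p_T(x), \qquad p_S(y \mid x) = p_T(y \mid x).
$$

That is, the marginal distribution of the features $x$ shifts between source and target, while the conditional distribution of the label $y$ given $x$ is shared. The support non-containment assumption mentioned in the abstract refers to the case where $\operatorname{supp}(p_T) \not\subseteq \operatorname{supp}(p_S)$, i.e., the target places mass on regions the source never covers.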

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-fujikawa25a,
  title = {Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift},
  author = {Fujikawa, Mitsuhiro and Akimoto, Youhei and Sakuma, Jun and Fukuchi, Kazuto},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages = {226--234},
  year = {2025},
  editor = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume = {258},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 May},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/fujikawa25a/fujikawa25a.pdf},
  url = {https://proceedings.mlr.press/v258/fujikawa25a.html},
  abstract = {Transfer learning enhances prediction accuracy on a target distribution by leveraging data from a source distribution, demonstrating significant benefits in various applications. This paper introduces a novel dissimilarity measure that utilizes vicinity information, i.e., the local structure of data points, to analyze the excess error in classification under covariate shift, a transfer learning setting where marginal feature distributions differ but conditional label distributions remain the same. We characterize the excess error using the proposed measure and demonstrate faster or competitive convergence rates compared to previous techniques. Notably, our approach is effective when the support non-containment assumption, which often appears in real-world applications, holds. Our theoretical analysis bridges the gap between current theoretical findings and empirical observations in transfer learning, particularly in scenarios with significant differences between source and target distributions.}
}
Endnote
%0 Conference Paper
%T Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift
%A Mitsuhiro Fujikawa
%A Youhei Akimoto
%A Jun Sakuma
%A Kazuto Fukuchi
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-fujikawa25a
%I PMLR
%P 226--234
%U https://proceedings.mlr.press/v258/fujikawa25a.html
%V 258
%X Transfer learning enhances prediction accuracy on a target distribution by leveraging data from a source distribution, demonstrating significant benefits in various applications. This paper introduces a novel dissimilarity measure that utilizes vicinity information, i.e., the local structure of data points, to analyze the excess error in classification under covariate shift, a transfer learning setting where marginal feature distributions differ but conditional label distributions remain the same. We characterize the excess error using the proposed measure and demonstrate faster or competitive convergence rates compared to previous techniques. Notably, our approach is effective when the support non-containment assumption, which often appears in real-world applications, holds. Our theoretical analysis bridges the gap between current theoretical findings and empirical observations in transfer learning, particularly in scenarios with significant differences between source and target distributions.
APA
Fujikawa, M., Akimoto, Y., Sakuma, J., & Fukuchi, K. (2025). Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Research 258:226-234. Available from https://proceedings.mlr.press/v258/fujikawa25a.html.

Related Material