An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift

Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel Rodrigues
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7433-7449, 2022.

Abstract

A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution. However, this assumption is not satisfied in many applications. In many scenarios, the data is collected sequentially (e.g., in healthcare) and the distribution of the data may change over time, often exhibiting so-called covariate shift. In this paper, we propose an approach to semi-supervised learning that is capable of addressing this issue. Our framework also recovers some popular methods, including entropy minimization and pseudo-labeling. We provide new information-theoretic generalization error upper bounds inspired by our novel framework. Our bounds are applicable to both general semi-supervised learning and the covariate-shift scenario. Finally, we show numerically that our method outperforms previous approaches proposed for semi-supervised learning under covariate shift.
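Note on the setting: under covariate shift, the marginal distribution of the inputs differs between training and test data while the conditional distribution of labels given inputs is shared. The abstract states that the proposed framework recovers entropy minimization and pseudo-labeling as special cases; the sketch below illustrates only those two generic baselines (not the authors' information-theoretic method), combining a supervised loss with an unlabeled-data regularizer. The function names, the weight `lam`, and the confidence `threshold` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def entropy_min_loss(probs_unlabeled):
    """Average predictive entropy on unlabeled data (to be minimized)."""
    p = probs_unlabeled
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def pseudo_label_loss(probs_unlabeled, threshold=0.9):
    """Cross-entropy against hard pseudo-labels on confident unlabeled predictions."""
    confident = probs_unlabeled.max(axis=1) >= threshold
    if not confident.any():
        return 0.0
    p = probs_unlabeled[confident]
    return -np.mean(np.log(p.max(axis=1) + 1e-12))

def ssl_objective(labeled_ce, probs_unlabeled, lam=0.5, mode="entropy"):
    """Generic SSL objective: supervised loss plus an unlabeled regularizer.

    `mode="entropy"` gives entropy minimization; `mode="pseudo"` gives
    pseudo-labeling. Both are standard baselines the abstract mentions.
    """
    if mode == "entropy":
        unlabeled_term = entropy_min_loss(probs_unlabeled)
    else:
        unlabeled_term = pseudo_label_loss(probs_unlabeled)
    return labeled_ce + lam * unlabeled_term
```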

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-aminian22a,
  title     = {An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift},
  author    = {Aminian, Gholamali and Abroshan, Mahed and Mahdi Khalili, Mohammad and Toni, Laura and Rodrigues, Miguel},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7433--7449},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/aminian22a/aminian22a.pdf},
  url       = {https://proceedings.mlr.press/v151/aminian22a.html},
  abstract  = {A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution. However, this assumption is not satisfied in many applications. In many scenarios, the data is collected sequentially (e.g., healthcare) and the distribution of the data may change over time often exhibiting so-called covariate shifts. In this paper, we propose an approach for semi-supervised learning algorithms that is capable of addressing this issue. Our framework also recovers some popular methods, including entropy minimization and pseudo-labeling. We provide new information-theoretical based generalization error upper bounds inspired by our novel framework. Our bounds are applicable to both general semi-supervised learning and the covariate-shift scenario. Finally, we show numerically that our method outperforms previous approaches proposed for semi-supervised learning under the covariate shift.}
}
Endnote
%0 Conference Paper
%T An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift
%A Gholamali Aminian
%A Mahed Abroshan
%A Mohammad Mahdi Khalili
%A Laura Toni
%A Miguel Rodrigues
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-aminian22a
%I PMLR
%P 7433--7449
%U https://proceedings.mlr.press/v151/aminian22a.html
%V 151
%X A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution. However, this assumption is not satisfied in many applications. In many scenarios, the data is collected sequentially (e.g., healthcare) and the distribution of the data may change over time often exhibiting so-called covariate shifts. In this paper, we propose an approach for semi-supervised learning algorithms that is capable of addressing this issue. Our framework also recovers some popular methods, including entropy minimization and pseudo-labeling. We provide new information-theoretical based generalization error upper bounds inspired by our novel framework. Our bounds are applicable to both general semi-supervised learning and the covariate-shift scenario. Finally, we show numerically that our method outperforms previous approaches proposed for semi-supervised learning under the covariate shift.
APA
Aminian, G., Abroshan, M., Mahdi Khalili, M., Toni, L. &amp; Rodrigues, M. (2022). An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7433-7449. Available from https://proceedings.mlr.press/v151/aminian22a.html.