Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias

Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:595-603, 2024.

Abstract

Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples. For neural networks, \texttt{softmax} prediction probabilities are often used as a confidence measure, although they are known to be overconfident, even for wrong predictions. This phenomenon is particularly intensified in the presence of sample selection bias, i.e., when data labeling is subject to some constraints. To address this issue, we propose a novel confidence measure, called $\mathcal{T}$-similarity, built upon the prediction diversity of an ensemble of linear classifiers. We provide a theoretical analysis of our approach by studying stationary points and describing the relationship between the diversity of the individual members and their performance. We empirically demonstrate the benefit of our confidence measure for three different pseudo-labeling policies on classification datasets of various data modalities. The code is available at https://github.com/ambroiseodt/tsim.
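
The idea can be illustrated with a short, self-contained sketch. The snippet below is not the reference implementation (see the linked repository for that); it computes an agreement-style confidence from an ensemble of probabilistic heads by averaging the pairwise dot products of their predictions, in the spirit of the $\mathcal{T}$-similarity described above, and uses it to select pseudo-labels. The function names, threshold, and toy data are illustrative assumptions.

import numpy as np

def ensemble_confidence(probas):
    # probas: (n_heads, n_samples, n_classes) prediction probabilities.
    # Average dot product over all ordered pairs of distinct heads:
    # close to 1 when the heads agree on one class, lower when they disagree.
    n_heads = probas.shape[0]
    all_pairs = np.einsum("hnc,gnc->n", probas, probas)   # includes h == g terms
    same_head = np.einsum("hnc,hnc->n", probas, probas)   # only the h == g terms
    return (all_pairs - same_head) / (n_heads * (n_heads - 1))

def select_pseudo_labels(probas, threshold=0.5):
    # Keep only the unlabeled points whose agreement confidence exceeds the
    # threshold; label them with the argmax of the averaged head predictions.
    confidence = ensemble_confidence(probas)
    labels = probas.mean(axis=0).argmax(axis=1)
    selected = np.flatnonzero(confidence >= threshold)
    return selected, labels[selected]

# Toy usage: 3 linear heads, 5 unlabeled samples, 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5, 4))
probas = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax
idx, pseudo = select_pseudo_labels(probas, threshold=0.3)
print(idx, pseudo)

Unlike a single \texttt{softmax} score, this confidence is low whenever the ensemble members disagree, which is exactly the situation the paper exploits to avoid confirming wrong pseudo-labels under sample selection bias.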

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-odonnat24a,
  title     = {Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias},
  author    = {Odonnat, Ambroise and Feofanov, Vasilii and Redko, Ievgen},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {595--603},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/odonnat24a/odonnat24a.pdf},
  url       = {https://proceedings.mlr.press/v238/odonnat24a.html},
  abstract  = {Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples. For neural networks, \texttt{softmax} prediction probabilities are often used as a confidence measure, although they are known to be overconfident, even for wrong predictions. This phenomenon is particularly intensified in the presence of sample selection bias, i.e., when data labeling is subject to some constraints. To address this issue, we propose a novel confidence measure, called $\mathcal{T}$-similarity, built upon the prediction diversity of an ensemble of linear classifiers. We provide the theoretical analysis of our approach by studying stationary points and describing the relationship between the diversity of the individual members and their performance. We empirically demonstrate the benefit of our confidence measure for three different pseudo-labeling policies on classification datasets of various data modalities. The code is available at https://github.com/ambroiseodt/tsim.}
}
Endnote
%0 Conference Paper
%T Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias
%A Ambroise Odonnat
%A Vasilii Feofanov
%A Ievgen Redko
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-odonnat24a
%I PMLR
%P 595--603
%U https://proceedings.mlr.press/v238/odonnat24a.html
%V 238
%X Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples. For neural networks, \texttt{softmax} prediction probabilities are often used as a confidence measure, although they are known to be overconfident, even for wrong predictions. This phenomenon is particularly intensified in the presence of sample selection bias, i.e., when data labeling is subject to some constraints. To address this issue, we propose a novel confidence measure, called $\mathcal{T}$-similarity, built upon the prediction diversity of an ensemble of linear classifiers. We provide the theoretical analysis of our approach by studying stationary points and describing the relationship between the diversity of the individual members and their performance. We empirically demonstrate the benefit of our confidence measure for three different pseudo-labeling policies on classification datasets of various data modalities. The code is available at https://github.com/ambroiseodt/tsim.
APA
Odonnat, A., Feofanov, V. & Redko, I. (2024). Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:595-603. Available from https://proceedings.mlr.press/v238/odonnat24a.html.