Self-training Converts Weak Learners to Strong Learners in Mixture Models

Spencer Frei, Difan Zou, Zixiang Chen, Quanquan Gu
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8003-8021, 2022.

Abstract

We consider a binary classification problem when the data comes from a mixture of two rotationally symmetric distributions satisfying concentration and anti-concentration properties enjoyed by log-concave distributions among others. We show that there exists a universal constant $C_{\mathrm{err}}>0$ such that if a pseudolabeler $\beta_{\mathrm{pl}}$ can achieve classification error at most $C_{\mathrm{err}}$, then for any $\varepsilon>0$, an iterative self-training algorithm initialized at $\beta_0 := \beta_{\mathrm{pl}}$ using pseudolabels $\hat y = \mathrm{sgn}(\langle \beta_t, \mathbf{x} \rangle)$ and using at most $\tilde O(d/\varepsilon^2)$ unlabeled examples suffices to learn the Bayes-optimal classifier up to $\varepsilon$ error, where $d$ is the ambient dimension. That is, self-training converts weak learners to strong learners using only unlabeled examples. We additionally show that by running gradient descent on the logistic loss one can obtain a pseudolabeler $\beta_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$ using only $O(d)$ labeled examples (i.e., independent of $\varepsilon$). Together our results imply that mixture models can be learned to within $\varepsilon$ of the Bayes-optimal accuracy using at most $O(d)$ labeled examples and $\tilde O(d/\varepsilon^2)$ unlabeled examples by way of a semi-supervised self-training algorithm.
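To make the two-stage procedure described in the abstract concrete, the following is a minimal NumPy sketch: stage one fits a weak pseudolabeler $\beta_{\mathrm{pl}}$ by gradient descent on the logistic loss over a small labeled set, and stage two iteratively refits on unlabeled batches using pseudolabels $\hat y = \mathrm{sgn}(\langle \beta_t, \mathbf{x} \rangle)$. The Gaussian mixture, step sizes, iteration counts, and sample sizes are illustrative assumptions, not the paper's exact algorithm or constants.

import numpy as np

rng = np.random.default_rng(0)
d = 50                           # ambient dimension
mu = np.ones(d) / np.sqrt(d)     # mixture mean direction (Bayes-optimal classifier is along mu)

def sample(n):
    """Draw n points from the two-component mixture x = y * mu + Gaussian noise."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + rng.standard_normal((n, d))
    return x, y

def logistic_grad(beta, x, y):
    """Average gradient of the logistic loss log(1 + exp(-y <beta, x>))."""
    margins = y * (x @ beta)
    weights = -y / (1.0 + np.exp(margins))
    return (weights[:, None] * x).mean(axis=0)

# Stage 1: weak pseudolabeler from O(d) labeled examples via gradient descent.
x_lab, y_lab = sample(4 * d)
beta = np.zeros(d)
for _ in range(200):
    beta -= 0.1 * logistic_grad(beta, x_lab, y_lab)
beta_pl = beta

# Stage 2: iterative self-training using only unlabeled examples.
beta_t = beta_pl.copy()
for t in range(50):
    x_unl, _ = sample(2000)              # unlabeled batch (true labels discarded)
    y_pseudo = np.sign(x_unl @ beta_t)   # pseudolabels from the current iterate
    beta_t -= 0.1 * logistic_grad(beta_t, x_unl, y_pseudo)

# Cosine alignment with the Bayes-optimal direction, before and after self-training.
cos = lambda b: b @ mu / (np.linalg.norm(b) * np.linalg.norm(mu))
print(f"pseudolabeler alignment: {cos(beta_pl):.3f}, after self-training: {cos(beta_t):.3f}")

On this toy instance the alignment with the mixture direction typically improves from stage one to stage two, mirroring the weak-to-strong behavior that the paper's theorems formalize for this family of mixture distributions.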

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-frei22a,
  title     = {Self-training Converts Weak Learners to Strong Learners in Mixture Models},
  author    = {Frei, Spencer and Zou, Difan and Chen, Zixiang and Gu, Quanquan},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {8003--8021},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/frei22a/frei22a.pdf},
  url       = {https://proceedings.mlr.press/v151/frei22a.html}
}
Endnote
%0 Conference Paper
%T Self-training Converts Weak Learners to Strong Learners in Mixture Models
%A Spencer Frei
%A Difan Zou
%A Zixiang Chen
%A Quanquan Gu
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-frei22a
%I PMLR
%P 8003--8021
%U https://proceedings.mlr.press/v151/frei22a.html
%V 151
APA
Frei, S., Zou, D., Chen, Z. & Gu, Q. (2022). Self-training Converts Weak Learners to Strong Learners in Mixture Models. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:8003-8021. Available from https://proceedings.mlr.press/v151/frei22a.html.