The information-theoretic value of unlabeled data in semi-supervised learning

Alexander Golovnev, David Pal, Balazs Szorenyi
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2328-2336, 2019.

Abstract

We quantify the separation between the numbers of labeled examples required to learn in two settings: with and without knowledge of the distribution of the unlabeled data. More specifically, we prove a separation by a $\Theta(\log n)$ multiplicative factor for the class of projections over the Boolean hypercube of dimension $n$. We prove that there is no separation for the class of all functions on a domain of any size. Learning with knowledge of the distribution (a.k.a. fixed-distribution learning) can be viewed as an idealized scenario of semi-supervised learning where the number of unlabeled data points is so great that the unlabeled distribution is known exactly. For this reason, we call the separation the value of unlabeled data.
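
For readers unfamiliar with the hypothesis class in question, projections over the Boolean hypercube admit a standard formalization (the display below is our notation; the page itself does not spell it out):

$$\mathcal{H}_n = \{h_1, \dots, h_n\}, \qquad h_i \colon \{0,1\}^n \to \{0,1\}, \qquad h_i(x) = x_i.$$

In this notation, fixed-distribution learning of $\mathcal{H}_n$ asks how many labeled examples $(x, h_{i^*}(x))$, with $x$ drawn from a known distribution over $\{0,1\}^n$, suffice to learn the unknown target $h_{i^*}$; the separation proved in the paper says that withholding the distribution can inflate this number by a $\Theta(\log n)$ multiplicative factor.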

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-golovnev19a,
  title     = {The information-theoretic value of unlabeled data in semi-supervised learning},
  author    = {Golovnev, Alexander and Pal, David and Szorenyi, Balazs},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2328--2336},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/golovnev19a/golovnev19a.pdf},
  url       = {https://proceedings.mlr.press/v97/golovnev19a.html}
}
Endnote
%0 Conference Paper
%T The information-theoretic value of unlabeled data in semi-supervised learning
%A Alexander Golovnev
%A David Pal
%A Balazs Szorenyi
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-golovnev19a
%I PMLR
%P 2328--2336
%U https://proceedings.mlr.press/v97/golovnev19a.html
%V 97
APA
Golovnev, A., Pal, D. &amp; Szorenyi, B. (2019). The information-theoretic value of unlabeled data in semi-supervised learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2328-2336. Available from https://proceedings.mlr.press/v97/golovnev19a.html.