Large deviations for the perceptron model and consequences for active learning

Hugo Cui, Luca Saglietti, Lenka Zdeborova
Proceedings of The First Mathematical and Scientific Machine Learning Conference, PMLR 107:390-430, 2020.

Abstract

Active learning is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher random neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any active learning algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing active learning algorithms. We also provide a comparison with the performance of some other popular active learning strategies.
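The setting described above can be illustrated with a minimal sketch (not code from the paper; all dimensions, names, and the passive-selection baseline are assumptions for illustration): a random Gaussian pool, a single-layer teacher generating sign labels, and a student perceptron trained on a labeled subset of the pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup, assumed for this sketch: input dimension d,
# pool size P, and a label budget of n queries.
d, P, n = 50, 200, 40

X = rng.standard_normal((P, d)) / np.sqrt(d)  # fixed finite pool of samples
w_teacher = rng.standard_normal(d)            # single-layer teacher weights
y = np.sign(X @ w_teacher)                    # ground-truth labels

# Passive baseline (not an active strategy): label a uniformly
# random subset, then run supervised perceptron learning on it.
idx = rng.choice(P, size=n, replace=False)

w = np.zeros(d)
for _ in range(100):                          # perceptron update sweeps
    for i in idx:
        if np.sign(X[i] @ w) != y[i]:
            w += y[i] * X[i]

# Proxy for learning performance: agreement with the teacher on the pool.
acc = np.mean(np.sign(X @ w) == y)
```

An active learning algorithm would replace the uniform choice of `idx` with an informed query rule; the paper's large-deviation analysis bounds the best accuracy any such choice of subset can achieve.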

Cite this Paper


BibTeX
@InProceedings{pmlr-v107-cui20a,
  title     = {Large deviations for the perceptron model and consequences for active learning},
  author    = {Cui, Hugo and Saglietti, Luca and Zdeborova, Lenka},
  booktitle = {Proceedings of The First Mathematical and Scientific Machine Learning Conference},
  pages     = {390--430},
  year      = {2020},
  editor    = {Lu, Jianfeng and Ward, Rachel},
  volume    = {107},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v107/cui20a/cui20a.pdf},
  url       = {https://proceedings.mlr.press/v107/cui20a.html},
  abstract  = {Active learning is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher random neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any active learning algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing active learning algorithms. We also provide a comparison with the performance of some other popular active learning strategies.}
}
Endnote
%0 Conference Paper
%T Large deviations for the perceptron model and consequences for active learning
%A Hugo Cui
%A Luca Saglietti
%A Lenka Zdeborova
%B Proceedings of The First Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Jianfeng Lu
%E Rachel Ward
%F pmlr-v107-cui20a
%I PMLR
%P 390--430
%U https://proceedings.mlr.press/v107/cui20a.html
%V 107
%X Active learning is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher random neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any active learning algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing active learning algorithms. We also provide a comparison with the performance of some other popular active learning strategies.
APA
Cui, H., Saglietti, L., & Zdeborova, L. (2020). Large deviations for the perceptron model and consequences for active learning. Proceedings of The First Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 107:390-430. Available from https://proceedings.mlr.press/v107/cui20a.html.