On the Relationship between Data Efficiency and Error for Uncertainty Sampling

Stephen Mussmann, Percy Liang
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3674-3682, 2018.

Abstract

While active learning offers potential cost savings, the actual data efficiency—the reduction in amount of labeled data needed to obtain the same error rate—observed in practice is mixed. This paper poses a basic question: when is active learning actually helpful? We provide an answer for logistic regression with the popular active learning algorithm, uncertainty sampling. Empirically, on 21 datasets from OpenML, we find a strong inverse correlation between data efficiency and the error rate of the final classifier. Theoretically, we show that for a variant of uncertainty sampling, the asymptotic data efficiency is within a constant factor of the inverse error rate of the limiting classifier.
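The uncertainty sampling algorithm studied in the paper can be illustrated with a minimal sketch: repeatedly train logistic regression on the labeled set, then query the unlabeled point whose predicted probability is closest to 0.5. Everything below (the toy data, the gradient-descent trainer, the function names) is hypothetical illustration, not code from the paper.

```python
import numpy as np

# Minimal sketch of uncertainty sampling for logistic regression.
# All data and helper names here are illustrative, not from the paper.

def fit_logistic(X, y, lr=0.1, steps=500):
    """Train logistic-regression weights by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def most_uncertain(X_pool, w):
    """Index of the pool point whose predicted probability is nearest 0.5."""
    p = 1.0 / (1.0 + np.exp(-X_pool @ w))
    return int(np.argmin(np.abs(p - 0.5)))

# Toy linearly separable data standing in for an OpenML dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

labeled = list(range(10))    # small seed set of labeled indices
pool = list(range(10, 100))  # unlabeled pool

for _ in range(5):                  # five active-learning rounds
    w = fit_logistic(X[labeled], y[labeled])
    i = most_uncertain(X[pool], w)  # query the most uncertain point
    labeled.append(pool.pop(i))     # "label" it and move it over

print(len(labeled))  # 15 labeled points after 5 queries
```

The paper's empirical claim is about how much such querying reduces the number of labels needed relative to random sampling (the data efficiency), as a function of the final error rate.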

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-mussmann18a,
  title     = {On the Relationship between Data Efficiency and Error for Uncertainty Sampling},
  author    = {Mussmann, Stephen and Liang, Percy},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3674--3682},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/mussmann18a/mussmann18a.pdf},
  url       = {http://proceedings.mlr.press/v80/mussmann18a.html},
  abstract  = {While active learning offers potential cost savings, the actual data efficiency—the reduction in amount of labeled data needed to obtain the same error rate—observed in practice is mixed. This paper poses a basic question: when is active learning actually helpful? We provide an answer for logistic regression with the popular active learning algorithm, uncertainty sampling. Empirically, on 21 datasets from OpenML, we find a strong inverse correlation between data efficiency and the error rate of the final classifier. Theoretically, we show that for a variant of uncertainty sampling, the asymptotic data efficiency is within a constant factor of the inverse error rate of the limiting classifier.}
}
Endnote
%0 Conference Paper
%T On the Relationship between Data Efficiency and Error for Uncertainty Sampling
%A Stephen Mussmann
%A Percy Liang
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-mussmann18a
%I PMLR
%P 3674--3682
%U http://proceedings.mlr.press/v80/mussmann18a.html
%V 80
%X While active learning offers potential cost savings, the actual data efficiency—the reduction in amount of labeled data needed to obtain the same error rate—observed in practice is mixed. This paper poses a basic question: when is active learning actually helpful? We provide an answer for logistic regression with the popular active learning algorithm, uncertainty sampling. Empirically, on 21 datasets from OpenML, we find a strong inverse correlation between data efficiency and the error rate of the final classifier. Theoretically, we show that for a variant of uncertainty sampling, the asymptotic data efficiency is within a constant factor of the inverse error rate of the limiting classifier.
APA
Mussmann, S. & Liang, P. (2018). On the Relationship between Data Efficiency and Error for Uncertainty Sampling. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3674-3682. Available from http://proceedings.mlr.press/v80/mussmann18a.html.