Non-uniform Consistency of Online Learning with Random Sampling

Changlong Wu, Narayana Santhanam
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:1265-1285, 2021.

Abstract

We study the problem of online learning a hypothesis class under a given binary 0-1 loss function, using instances generated i.i.d. from a given distribution. The goal of the online learner is to make only finitely many errors (loss $1$) with probability $1$ over the infinite horizon. In the binary label case, we show that a hypothesis class is online learnable in the above sense if and only if it is effectively countable. We extend the results to hypothesis classes with non-binary labels. The characterization of online learnable classes is more involved for general loss functions and is not fully captured by the countability condition, even in the ternary label case. In the computationally bounded setup, we compare our results with well-known results in recursive function learning, showing that the class of all total computable functions is indeed learnable by computable online learners with randomized sampling. Finally, we also show that the finite-error guarantee is unaffected even when independent noise is added to the labels.
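To illustrate the flavor of the finite-error guarantee for countable classes (a standard textbook argument, not the paper's construction), the sketch below runs the classical enumerative learner: given an enumeration $h_1, h_2, \dots$ of the class and a realizable stream, it predicts with the smallest-index hypothesis consistent with the history. Every mistake strictly increases that index, which can never exceed the index of the target, so the learner makes finitely many errors on any instance stream, in particular an i.i.d. one. The names `enumerative_learner`, `sample_instance`, and the toy threshold class are hypothetical and chosen only for this example.

```python
# Minimal sketch (assumptions as stated above): enumerative learner for a
# countable hypothesis class in the realizable, binary-label setting.
import random

def enumerative_learner(hypotheses, target, sample_instance, horizon=10_000):
    """Run the online protocol for `horizon` rounds; return the mistake count.

    hypotheses      -- list standing in for a (countable) enumeration of H
    target          -- the true labeling function, assumed to lie in H
    sample_instance -- draws one instance (models the i.i.d. source)
    """
    history = []     # past (instance, label) pairs
    idx = 0          # index of the current candidate hypothesis
    mistakes = 0
    for _ in range(horizon):
        x = sample_instance()
        # advance to the first hypothesis consistent with the history;
        # since the target is in H, idx never passes the target's index
        while any(hypotheses[idx](xi) != yi for xi, yi in history):
            idx += 1
        y_hat = hypotheses[idx](x)
        y = target(x)                 # true label revealed after predicting
        mistakes += int(y_hat != y)
        history.append((x, y))
    return mistakes

# Toy countable class: integer thresholds h_k(x) = 1[x >= k].
H = [(lambda k: (lambda x: int(x >= k)))(k) for k in range(50)]
target = H[17]
print(enumerative_learner(H, target, lambda: random.randint(0, 49)))
```

With target $h_k$ the mistake count is at most $k-1$ regardless of the horizon, which is the "finitely many errors with probability $1$" behavior the abstract refers to; the paper's contribution concerns when such guarantees are possible at all, including the converse direction and the computable setting.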

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-wu21a,
  title     = {Non-uniform Consistency of Online Learning with Random Sampling},
  author    = {Wu, Changlong and Santhanam, Narayana},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages     = {1265--1285},
  year      = {2021},
  editor    = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume    = {132},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v132/wu21a/wu21a.pdf},
  url       = {https://proceedings.mlr.press/v132/wu21a.html},
  abstract  = {We study the problem of online learning a hypothesis class and a given binary 0-1 loss function, using instances generated $i.i.d.$ by a given distribution. The goal of the online learner is to make only finitely many errors (loss 1) with probability $1$ in the infinite horizon. In the binary label case, we show that hypothesis classes are online learnable in the above sense if and only if the class is effectively countable. We extend the results to hypothesis classes where labels can be non-binary. Characterization of non-binary online learnable classes is more involved for general loss functions and is not captured fully by the countability condition even for the ternary label case. In the computational bounded setup, we compare our results with well known results in recursive function learning, showing that the class of all total computable functions is indeed learnable with computable online learners and randomized sampling. Finally, we also show that the finite error guarantee will not be affected even when independent noise is added to the label.}
}
Endnote
%0 Conference Paper
%T Non-uniform Consistency of Online Learning with Random Sampling
%A Changlong Wu
%A Narayana Santhanam
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-wu21a
%I PMLR
%P 1265--1285
%U https://proceedings.mlr.press/v132/wu21a.html
%V 132
%X We study the problem of online learning a hypothesis class and a given binary 0-1 loss function, using instances generated $i.i.d.$ by a given distribution. The goal of the online learner is to make only finitely many errors (loss 1) with probability $1$ in the infinite horizon. In the binary label case, we show that hypothesis classes are online learnable in the above sense if and only if the class is effectively countable. We extend the results to hypothesis classes where labels can be non-binary. Characterization of non-binary online learnable classes is more involved for general loss functions and is not captured fully by the countability condition even for the ternary label case. In the computational bounded setup, we compare our results with well known results in recursive function learning, showing that the class of all total computable functions is indeed learnable with computable online learners and randomized sampling. Finally, we also show that the finite error guarantee will not be affected even when independent noise is added to the label.
APA
Wu, C. & Santhanam, N. (2021). Non-uniform Consistency of Online Learning with Random Sampling. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:1265-1285. Available from https://proceedings.mlr.press/v132/wu21a.html.