Non-uniform Consistency of Online Learning with Random Sampling
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:1265-1285, 2021.
Abstract
We study the problem of online learning of a hypothesis class under a given binary 0-1 loss function, with instances generated i.i.d. from a given distribution. The goal of the online learner is to make only finitely many errors (loss 1) with probability $1$ over the infinite horizon. In the binary-label case, we show that a hypothesis class is online learnable in this sense if and only if it is effectively countable. We extend the results to hypothesis classes with non-binary labels. The characterization of non-binary online learnable classes is more involved for general loss functions and is not fully captured by the countability condition, even in the ternary-label case. In the computationally bounded setting, we compare our results with well-known results in recursive function learning, showing that the class of all total computable functions is learnable with computable online learners and random sampling. Finally, we show that the finite-error guarantee is not affected even when independent noise is added to the labels.
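To make the setting concrete, the following is an illustrative sketch (not the paper's construction): a Gold-style enumeration learner over an assumed countable class of threshold classifiers on $[0,1]$, with instances drawn i.i.d. from the uniform distribution. The learner predicts with the first hypothesis in the enumeration consistent with all labeled instances seen so far; when the target lies in the class, it incurs only finitely many mistakes with probability 1.

```python
import random

def hypotheses():
    """Enumerate a countable family of threshold classifiers h_t(x) = [x >= t]."""
    yield 0.0
    d = 1
    while True:
        d += 1
        for k in range(1, d):
            yield k / d  # thresholds k/d, d = 2, 3, ...

def run(target_threshold=0.5, rounds=10_000, seed=0):
    """Simulate the online protocol with i.i.d. instances from U[0, 1].

    Hypothetical parameters chosen for illustration only; the target
    threshold is assumed to appear in the enumeration above.
    """
    rng = random.Random(seed)
    history = []            # labeled instances seen so far
    enum = hypotheses()
    current = next(enum)    # current hypothesis (a threshold)
    mistakes = 0
    for _ in range(rounds):
        x = rng.random()                        # instance sampled i.i.d.
        true_label = int(x >= target_threshold)
        pred = int(x >= current)
        if pred != true_label:
            mistakes += 1
        history.append((x, true_label))
        # Advance in the enumeration until the hypothesis is consistent
        # with the entire history (consistent / enumeration learner).
        while any(int(x_ >= current) != y_ for x_, y_ in history):
            current = next(enum)
    return mistakes

if __name__ == "__main__":
    print("mistakes over 10,000 rounds:", run())
```

In this sketch, every hypothesis preceding the target in the enumeration disagrees with it on a set of positive measure, so it is almost surely contradicted by some sampled instance; once the learner reaches the target it never moves again, which is why the total mistake count is finite with probability 1.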