The Sample Complexity of Self-Verifying Bayesian Active Learning


Liu Yang, Steve Hanneke, Jaime Carbonell;
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:816-822, 2011.

Abstract

We prove that access to a prior distribution over target functions can dramatically improve the sample complexity of self-terminating active learning algorithms, so that it is always better than the known results for prior-dependent passive learning. In particular, this is in stark contrast to the analysis of prior-independent algorithms, where there are simple known learning problems for which no self-terminating algorithm can provide this guarantee for all priors.
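To make the notion of a self-terminating (self-verifying) prior-dependent learner concrete, here is a minimal illustrative sketch, not the paper's algorithm: binary-search active learning of a 1-D threshold under an assumed uniform prior on [0, 1]. The learner chooses its own query points and halts once the region of prior mass still consistent with the observed labels is at most epsilon, so it certifies its own accuracy without external supervision. All names (`self_terminating_active_learner`, `target_threshold`) are hypothetical.

```python
def self_terminating_active_learner(target_threshold, epsilon=0.05):
    """Illustrative self-terminating active learner for 1-D thresholds.

    Assumes a uniform prior on [0, 1]. Bisects the interval of candidate
    thresholds consistent with the labels seen so far, and halts on its
    own once that interval has prior mass at most epsilon.
    """
    lo, hi = 0.0, 1.0
    queries = 0
    # Under the uniform prior, hi - lo is exactly the prior mass of the
    # remaining candidate region, so the stopping rule is self-verifying.
    while hi - lo > epsilon:
        mid = (lo + hi) / 2.0
        # Simulated label oracle: points at or above the threshold are 1.
        label = 1 if mid >= target_threshold else 0
        queries += 1
        if label == 1:
            hi = mid  # threshold lies at or below the query point
        else:
            lo = mid  # threshold lies above the query point
    return (lo + hi) / 2.0, queries


estimate, n_queries = self_terminating_active_learner(0.37, epsilon=0.05)
```

In this toy setting the learner halts after O(log(1/epsilon)) label queries, whereas a passive learner would need on the order of 1/epsilon labels for the same accuracy, illustrating the kind of gap between active and passive sample complexity the paper studies.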