Optimal sampling in unbiased active learning
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:559-569, 2020.
Abstract
A common belief in unbiased active learning is that, to capture the most informative instances, the sampling probabilities should be proportional to the uncertainty of the class labels. We argue that this produces suboptimal predictions. We present sampling schemes for unbiased pool-based active learning that minimise the actual prediction error, and we demonstrate better predictive performance than competing methods on a number of benchmark datasets. By contrast, both probabilistic and deterministic uncertainty sampling performed worse than simple random sampling on some of the datasets.
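To make the contrast concrete, the following is a minimal illustrative sketch (not the paper's actual scheme) of the uncertainty-proportional sampling heuristic the abstract critiques, together with the Horvitz-Thompson inverse-probability weights that keep the resulting estimator unbiased under any non-degenerate sampling design. All numbers and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: predicted class-1 probabilities for 8 unlabelled instances
# (hypothetical values; in practice these come from the current model).
p_hat = np.array([0.05, 0.10, 0.30, 0.45, 0.50, 0.55, 0.90, 0.95])

# Label uncertainty of each instance: p(1 - p), maximal at p = 0.5.
uncertainty = p_hat * (1.0 - p_hat)

# Uncertainty-proportional sampling probabilities (the common heuristic).
q_uncert = uncertainty / uncertainty.sum()

# Uniform probabilities, i.e. simple random sampling, for comparison.
q_unif = np.full_like(p_hat, 1.0 / len(p_hat))

def ht_weights(q, sampled_idx):
    """Horvitz-Thompson inverse-probability weights: reweighting each
    sampled instance by 1/q keeps downstream estimators unbiased
    regardless of which sampling scheme produced the sample."""
    return 1.0 / q[sampled_idx]

# Draw 3 instances under the uncertainty-proportional scheme and
# compute the weights they would carry in an unbiased loss estimate.
idx_uncert = rng.choice(len(p_hat), size=3, replace=False, p=q_uncert)
w_uncert = ht_weights(q_uncert, idx_uncert)
```

The paper's point is that the choice of `q` matters: unbiasedness holds for any such scheme, so the probabilities should be chosen to minimise the prediction error of the resulting model rather than set proportional to label uncertainty.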