New Bounds for Learning Intervals with Implications for Semi-Supervised Learning
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:30.1-30.15, 2012.
Abstract
We study learning of initial intervals in the prediction model. We show that for each distribution $D$ over the domain, there is an algorithm $A_D$ whose probability of a mistake in round $m$ is at most $(\frac{1}{2} + o(1))/m$. We also show that the best possible bound that can be achieved in the case in which the same algorithm $A$ must be applied for all distributions $D$ is at least $(\frac{1}{\sqrt{e}} - o(1))\frac{1}{m} > (\frac{3}{5} - o(1))\frac{1}{m}$. Informally, “knowing” the distribution $D$ enables an algorithm to reduce its error rate by a constant factor strictly greater than 1. As advocated by Ben-David et al. (2008), knowledge of $D$ can be viewed as an idealized proxy for a large number of unlabeled examples.
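The abstract only states the bounds, so the following Python sketch is purely illustrative of the setting: the target concept is an initial interval $[0, t]$, examples arrive i.i.d. from a distribution $D$ (assumed uniform on $[0, 1]$ here), and in round $m$ the learner predicts the label of $x_m$ after seeing the $m-1$ previously labeled examples. The function name, the choice of $D$, and the version-space midpoint rule are assumptions for illustration; they are not the algorithms $A_D$ or $A$ analyzed in the paper.

```python
import random

def sample_initial_interval_run(t=0.3, m=200, seed=0):
    """Simulate one run of online prediction for the initial interval [0, t].

    Examples x_1, ..., x_m are drawn i.i.d. from D (uniform on [0, 1] here,
    as an illustrative assumption). Before round i the learner has seen the
    labeled pairs (x_1, y_1), ..., (x_{i-1}, y_{i-1}), where y_j = 1 iff
    x_j <= t, and must predict the label of x_i.  Returns 1 if the learner
    errs in the final round m, else 0.
    """
    rng = random.Random(seed)
    max_pos = 0.0  # largest point known to lie inside the interval
    min_neg = 1.0  # smallest point known to lie outside the interval
    mistake_at_m = 0
    for i in range(1, m + 1):
        x = rng.random()
        # Illustrative distribution-free learner: use the midpoint of the
        # version space [max_pos, min_neg] as the estimated threshold.
        threshold_estimate = (max_pos + min_neg) / 2.0
        prediction = 1 if x <= threshold_estimate else 0
        truth = 1 if x <= t else 0
        if i == m and prediction != truth:
            mistake_at_m = 1
        # Reveal the true label and shrink the version space accordingly.
        if truth == 1:
            max_pos = max(max_pos, x)
        else:
            min_neg = min(min_neg, x)
    return mistake_at_m

# Estimate the probability of a mistake in round m by averaging many runs;
# the bounds in the abstract say this quantity scales like a constant over m.
m, runs = 200, 20000
err = sum(sample_initial_interval_run(m=m, seed=s) for s in range(runs)) / runs
print(f"empirical mistake probability at round {m}: {err:.4f} (1/m = {1/m:.4f})")
```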