The Power of Random Counterexamples
Proceedings of the 28th International Conference on Algorithmic Learning Theory, PMLR 76:452-465, 2017.
Abstract
Learning a target concept from a finite $n \times m$ concept space requires $\Omega(n)$ proper equivalence queries in the worst case. We propose a variation of the usual equivalence query in which the teacher is constrained to choose counterexamples randomly from a known probability distribution on examples. We present and analyze the MaxMin learning algorithm, which identifies an arbitrary target concept in an arbitrary finite $n \times m$ concept space using at most an expected $\log_2 n$ proper equivalence queries with random counterexamples.