# Open Problem: Are all VC-classes CPAC learnable?

*Proceedings of Thirty Fourth Conference on Learning Theory*, PMLR 134:4636-4641, 2021.

#### Abstract

A few years ago, it was shown that there exist basic statistical learning problems whose learnability cannot be determined within ZFC [Ben-David, Hrubes, Moran, Shpilka, Yehudayoff, 2017]. Such independence, and the implied impossibility of characterizing the learnability of a class by any combinatorial parameter, stems from the basic definitions viewing learners as arbitrary functions. That level of generality not only results in unprovability issues but is also problematic from the perspective of modeling practical machine learning, where learners and predictors are computable objects. In light of that, it is natural to consider learnability by algorithms that output computable predictors (both learners and predictors are then representable as finite objects). A recent study [Agarwal, Ananthakrishnan, Ben-David, Lechner and Urner, 2020] initiated a theory of such models of learning. It proposed the notion of CPAC learnability, adding some basic computability requirements to the PAC learning framework. As a first step towards a characterization of learnability in the CPAC framework, Agarwal et al. showed that, as far as proper learners are concerned, CPAC learnability of a binary hypothesis class is no longer implied by the finiteness of its VC-dimension. A major remaining open question is whether a similar result also holds for improper learning. Namely, does there exist a computable concept class, consisting of computable classifiers, that has finite VC-dimension but that no computable learner can PAC learn (even if the learner is not restricted to output a hypothesis that is a member of the class)? A further interesting question is to find combinatorial characterizations of learnability for computable learners.
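To make the combinatorial parameter at the heart of the question concrete: the VC-dimension of a class is the size of the largest set of points it shatters (realizes all possible binary labelings of). The following minimal sketch (not from the paper; the function names and the brute-force approach are illustrative assumptions, feasible only for small finite classes and domains) checks shattering directly.

```python
from itertools import combinations

def shatters(hypotheses, points):
    # A set of points is shattered if the class realizes all 2^|points|
    # binary labelings of those points.
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

def vc_dimension(hypotheses, domain):
    # Largest d such that some d-element subset of the domain is shattered
    # (brute force over all subsets; exponential, illustration only).
    d = 0
    for size in range(1, len(domain) + 1):
        if any(shatters(hypotheses, subset)
               for subset in combinations(domain, size)):
            d = size
        else:
            break
    return d

# Example: threshold classifiers h_t(x) = 1[x >= t] on the domain {0,...,4}.
# Any single point is shattered, but no pair is (the labeling (1, 0) on an
# ordered pair is unrealizable), so the VC-dimension is 1.
thresholds = [lambda x, t=t: int(x >= t) for t in range(6)]
print(vc_dimension(thresholds, list(range(5))))  # -> 1
```

Note that this computability-by-enumeration is exactly what fails in the CPAC setting: the open problem concerns classes that are computable as objects yet admit no computable PAC learner.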