Open Problem: Are all VC-classes CPAC learnable?

Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:4636-4641, 2021.

Abstract

A few years ago, it was shown that there exist basic statistical learning problems whose learnability cannot be determined within ZFC [Ben-David, Hrubes, Moran, Shpilka, Yehudayoff, 2017]. Such independence, and the implied impossibility of characterizing the learnability of a class by any combinatorial parameter, stems from the basic definitions viewing learners as arbitrary functions. That level of generality not only leads to unprovability issues but is also problematic for modeling practical machine learning, where learners and predictors are computable objects. In light of that, it is natural to consider learnability by algorithms that output computable predictors (both learners and predictors are then representable as finite objects). A recent study [Agarwal, Ananthakrishnan, Ben-David, Lechner and Urner, 2020] initiated a theory of such models of learning. It proposed the notion of CPAC learnability, obtained by adding basic computability requirements to the PAC learning framework. As a first step towards a characterization of learnability in the CPAC framework, Agarwal et al. showed that, as far as proper learners are concerned, CPAC learnability of a binary hypothesis class is no longer implied by the finiteness of its VC-dimension. A major remaining open question is whether a similar result also holds for improper learning. Namely, does there exist a computable concept class, consisting of computable classifiers and of finite VC-dimension, that no computable learner can PAC learn (even if the learner is not restricted to output a hypothesis that is a member of the class)? A further interesting question is to find combinatorial characterizations of learnability for computable learners.
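For concreteness, the open question can be sketched in symbols as below. This is a paraphrase in our own notation (using the standard agnostic-style PAC guarantee), not a verbatim statement of the definitions in Agarwal et al. (2020).

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Standard error of a classifier h under a distribution D over X x {0,1}:
\[
  \mathrm{err}_D(h) \;=\; \Pr_{(x,y)\sim D}\bigl[h(x)\neq y\bigr].
\]

% Improper CPAC learnability (paraphrased): the learner A must be a computable
% function from samples to (descriptions of) computable classifiers.
A class $\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}$ of computable classifiers
is \emph{(improperly) CPAC learnable} if there exist a computable learner $A$
and a sample-complexity bound $m(\epsilon,\delta)$ such that, for every
distribution $D$ and every $\epsilon,\delta\in(0,1)$, given an i.i.d.\ sample
$S\sim D^{m(\epsilon,\delta)}$, with probability at least $1-\delta$,
\[
  \mathrm{err}_D\bigl(A(S)\bigr)
  \;\le\; \inf_{h\in\mathcal{H}} \mathrm{err}_D(h) + \epsilon,
\]
where $A(S)$ must be a computable classifier but need not belong to
$\mathcal{H}$.

% The open problem: does finite VC-dimension still suffice in this model?
\textbf{Open question.} Does every such $\mathcal{H}$ with
$\mathrm{VC}(\mathcal{H})<\infty$ admit a computable (possibly improper) PAC
learner, or is there a computable class of finite VC-dimension that no
computable learner can PAC learn?

\end{document}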

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-open-problem-agarwal21b,
  title     = {Open Problem: Are all VC-classes CPAC learnable?},
  author    = {Agarwal, Sushant and Ananthakrishnan, Nivasini and Ben-David, Shai and Lechner, Tosca and Urner, Ruth},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {4636--4641},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/agarwal21b/agarwal21b.pdf},
  url       = {https://proceedings.mlr.press/v134/open-problem-agarwal21b.html}
}