Hardness of Improper One-Sided Learning of Conjunctions For All Uniformly Falsifiable CSPs

Alexander Durgin, Brendan Juba
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:369-382, 2019.

Abstract

We consider several closely related variants of PAC-learning in which false-positive and false-negative errors are treated differently. In these models we seek to guarantee a given, low rate of false-positive errors and as few false-negative errors as possible given that we meet the false-positive constraint. Bshouty and Burroughs first observed that learning conjunctions in such models would enable PAC-learning of DNF in the usual distribution-free model; in turn, results of Daniely and Shalev-Shwartz establish that learning of DNF would imply algorithms for refuting random k-SAT using far fewer constraints than believed possible. Such algorithms would violate a slight strengthening of Feige’s R3SAT assumption, and would violate the RCSP hypothesis of Barak et al. We show here that, in fact, an algorithm for learning conjunctions in this model would have much more far-reaching consequences: it gives refutation algorithms for all predicates that are falsified by one of the uniform constant strings. To our knowledge, this is the first hardness result for improper learning based on such a large class of natural average-case problems with natural distributions.
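For concreteness, the one-sided objective described above can be sketched as follows; this is an illustrative formalization under common PAC conventions, not necessarily the exact definitions used in the paper. Given a target concept $c$, a distribution $D$ over examples, and a false-positive budget $\varepsilon > 0$, the learner must output a hypothesis $h$ satisfying

$$\Pr_{x \sim D}\left[\, h(x) = 1 \wedge c(x) = 0 \,\right] \le \varepsilon,$$

while making the false-negative rate

$$\Pr_{x \sim D}\left[\, h(x) = 0 \wedge c(x) = 1 \,\right]$$

as small as possible subject to that constraint.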

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-durgin19a,
  title     = {Hardness of Improper One-Sided Learning of Conjunctions For All Uniformly Falsifiable CSPs},
  author    = {Durgin, Alexander and Juba, Brendan},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {369--382},
  year      = {2019},
  editor    = {Garivier, Aurélien and Kale, Satyen},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/durgin19a/durgin19a.pdf},
  url       = {https://proceedings.mlr.press/v98/durgin19a.html},
  abstract  = {We consider several closely related variants of PAC-learning in which false-positive and false-negative errors are treated differently. In these models we seek to guarantee a given, low rate of false-positive errors and as few false-negative errors as possible given that we meet the false-positive constraint. Bshouty and Burroughs first observed that learning conjunctions in such models would enable PAC-learning of DNF in the usual distribution-free model; in turn, results of Daniely and Shalev-Shwartz establish that learning of DNF would imply algorithms for refuting random k-SAT using far fewer constraints than believed possible. Such algorithms would violate a slight strengthening of Feige’s R3SAT assumption, and would violate the RCSP hypothesis of Barak et al. We show here that, in fact, an algorithm for learning conjunctions in this model would have much more far-reaching consequences: it gives refutation algorithms for all predicates that are falsified by one of the uniform constant strings. To our knowledge, this is the first hardness result for improper learning based on such a large class of natural average-case problems with natural distributions.}
}
Endnote
%0 Conference Paper
%T Hardness of Improper One-Sided Learning of Conjunctions For All Uniformly Falsifiable CSPs
%A Alexander Durgin
%A Brendan Juba
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-durgin19a
%I PMLR
%P 369--382
%U https://proceedings.mlr.press/v98/durgin19a.html
%V 98
%X We consider several closely related variants of PAC-learning in which false-positive and false-negative errors are treated differently. In these models we seek to guarantee a given, low rate of false-positive errors and as few false-negative errors as possible given that we meet the false-positive constraint. Bshouty and Burroughs first observed that learning conjunctions in such models would enable PAC-learning of DNF in the usual distribution-free model; in turn, results of Daniely and Shalev-Shwartz establish that learning of DNF would imply algorithms for refuting random k-SAT using far fewer constraints than believed possible. Such algorithms would violate a slight strengthening of Feige’s R3SAT assumption, and would violate the RCSP hypothesis of Barak et al. We show here that, in fact, an algorithm for learning conjunctions in this model would have much more far-reaching consequences: it gives refutation algorithms for all predicates that are falsified by one of the uniform constant strings. To our knowledge, this is the first hardness result for improper learning based on such a large class of natural average-case problems with natural distributions.
APA
Durgin, A. & Juba, B. (2019). Hardness of Improper One-Sided Learning of Conjunctions For All Uniformly Falsifiable CSPs. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 98:369-382. Available from https://proceedings.mlr.press/v98/durgin19a.html.
