Hardness of Improper One-Sided Learning of Conjunctions for All Uniformly Falsifiable CSPs
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:369-382, 2019.
Abstract
We consider several closely related variants of PAC-learning in which false-positive and false-negative errors are treated differently. In these models we seek to guarantee a given, low rate of false-positive errors and as few false-negative errors as possible subject to meeting the false-positive constraint. Bshouty and Burroughs first observed that learning conjunctions in such models would enable PAC-learning of DNF in the usual distribution-free model; in turn, results of Daniely and Shalev-Shwartz establish that learning of DNF would imply algorithms for refuting random k-SAT using far fewer constraints than believed possible. Such algorithms would violate a slight strengthening of Feige's R3SAT assumption, and would violate the RCSP hypothesis of Barak et al. We show here that, in fact, an algorithm for learning conjunctions in this model would have much more far-reaching consequences: it gives refutation algorithms for all predicates that are falsified by one of the uniform constant strings. To our knowledge, this is the first hardness result of improper learning for such a large class of natural average-case problems with natural distributions.