On Learning vs. Refutation
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1835–1848, 2017.
Abstract
Building on the work of Daniely et al. (STOC 2014, COLT 2016), we study the connection between computationally efficient PAC learning and refutation of constraint satisfaction problems. Specifically, we prove that for every concept class $\mathcal{P}$, PAC-learning $\mathcal{P}$ is \emph{polynomially equivalent} to “random-right-hand-side-refuting” (“RRHS-refuting”) a dual class $\mathcal{P}^*$, where RRHS-refutation of a class $\mathcal{Q}$ refers to refuting systems of equations in which the constraints are (worst-case) functions from the class $\mathcal{Q}$ but the right-hand sides of the equations are uniform and independent random bits. The reduction from refutation to PAC learning can be viewed as an abstraction of (part of) the work of Daniely, Linial, and Shalev-Shwartz (STOC 2014). The converse, however, is new, and is based on a combination of techniques from pseudorandomness (Yao ’82) with boosting (Schapire ’90). In addition, we show that PAC-learning the class of $\mathit{DNF}$ formulas is polynomially equivalent to PAC-learning its dual class $\mathit{DNF}^*$, and thus PAC-learning $\mathit{DNF}$ is equivalent to RRHS-refutation of $\mathit{DNF}$, suggesting an avenue to obtain stronger lower bounds for PAC-learning $\mathit{DNF}$ than the quasipolynomial lower bound that was obtained by Daniely and Shalev-Shwartz (COLT 2016) assuming the hardness of refuting $k$-SAT.
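To make the RRHS-refutation setting concrete, here is a minimal toy sketch (not the paper's construction; the class, sizes, and names below are hypothetical illustrations). We take a small constraint class $\mathcal{Q}$ of 3-literal conjunctions, attach uniform and independent random right-hand-side bits, and check by brute force that the resulting random system is far from satisfiable: since each fixed assignment satisfies each equation with probability exactly 1/2 over the random right-hand side, the best satisfiable fraction concentrates near 1/2.

```python
import itertools
import random

random.seed(0)
n, m = 8, 200  # toy sizes (hypothetical; not parameters from the paper)

# Toy constraint class Q: conjunctions of 3 literals (DNF terms).
# A term is a list of (variable, negated?) pairs.
def sample_term():
    vars_ = random.sample(range(n), 3)
    return [(v, random.choice([0, 1])) for v in vars_]

def eval_term(term, x):
    # Literal (v, s) is x_v when s == 0 and NOT x_v when s == 1.
    return int(all(x[v] != s for v, s in term))

# RRHS instance: constraints q_i from Q, but the right-hand sides b_i
# are uniform and independent random bits.
terms = [sample_term() for _ in range(m)]
rhs = [random.getrandbits(1) for _ in range(m)]

# Brute force over all 2^n assignments: the best achievable fraction of
# satisfied equations q_i(x) = b_i stays close to 1/2, so a random RRHS
# instance is far from satisfiable with high probability.
best = max(
    sum(eval_term(t, x) == b for t, b in zip(terms, rhs))
    for x in itertools.product([0, 1], repeat=n)
) / m
print(f"best satisfiable fraction: {best:.3f}")
```

A refutation algorithm for $\mathcal{Q}$ must efficiently certify this unsatisfiability on random right-hand sides, which this brute-force check only does in exponential time.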