Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas


Vitaly Feldman, Homin K. Lee, Rocco A. Servedio;
Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:273-292, 2011.

Abstract

Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of “simple” monotone functions that is based on a recent characterization of strong SQ learnability by Simon (2007). Using this characterization, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned by any polynomial-time SQ algorithm to accuracy $1 - 1/(\log n)^{\Omega(1)}$. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to a certain $\frac{1}{2} + o(1)$ accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of “mildly hard” learning problems in either the PAC or SQ framework. This hardness amplification for learning builds on the ideas in the work of O’Donnell (2002) on hardness amplification for approximating functions using small circuits, and is applicable to a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning of depth-3 monotone formulas.
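As a point of reference for the two notions above, here is a sketch of the standard definitions (assuming Kearns’s formulation of the SQ model; the symbols $\psi$, $\tau$, $C$, and $F$ below are illustrative and not taken from the paper). In the SQ model, a learner for a target $f : \{0,1\}^n \to \{-1,1\}$ never receives labeled examples; instead it submits a query $(\psi, \tau)$, where $\psi : \{0,1\}^n \times \{-1,1\} \to [-1,1]$ is a predicate and $\tau > 0$ is a tolerance, and the oracle may answer with any value $v$ satisfying

\[ \bigl|\, v - \mathbf{E}_{x \sim \mathcal{U}}\bigl[\psi(x, f(x))\bigr] \,\bigr| \;\le\; \tau. \]

An amplification template in the style of O’Donnell (2002) combines $k$ independent copies of a mildly hard $f$ via a suitable combining function $C$,

\[ F\bigl(x^{1}, \dots, x^{k}\bigr) \;=\; C\bigl(f(x^{1}), \dots, f(x^{k})\bigr), \]

so that any efficient algorithm predicting $F$ with accuracy noticeably better than $1/2$ would yield a better approximator for $f$ than the mild hardness permits.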
