Improved Hardness Results for Learning Intersections of Halfspaces

Stefan Tiegel
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4764-4786, 2024.

Abstract

We show strong (and surprisingly simple) lower bounds for weakly learning intersections of halfspaces in the improper setting. Strikingly little is known about this problem. For instance, it is not even known whether there is a polynomial-time algorithm for learning the intersection of only two halfspaces. On the other hand, lower bounds based on well-established assumptions (such as approximating worst-case lattice problems or variants of Feige’s 3SAT hypothesis) are only known (or are implied by existing results) for intersections of super-logarithmically many halfspaces (KS06, KS09, DS16); intersections of fewer halfspaces are ruled out only under less standard assumptions (DV21), such as the existence of local pseudo-random generators with large stretch. We significantly narrow this gap by showing that even learning $\omega(\log \log N)$ halfspaces in dimension $N$ takes super-polynomial time under standard assumptions on worst-case lattice problems (namely, that SVP and SIVP are hard to approximate within polynomial factors). Further, we give unconditional hardness results in the statistical query (SQ) framework. Specifically, we show that for any $k$ (even constant), learning $k$ halfspaces in dimension $N$ requires accuracy $N^{-\Omega(k)}$ or exponentially many queries; in particular, this rules out SQ algorithms with polynomial accuracy for $\omega(1)$ halfspaces. To the best of our knowledge, this is the first unconditional hardness result for learning a super-constant number of halfspaces. Our lower bounds are obtained in a unified way via a novel connection we make between intersections of halfspaces and the so-called parallel pancakes distribution (DKS17, PLBR19, BRST21), which has been at the heart of many lower bound constructions in (robust) high-dimensional statistics over the past few years.
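For readers unfamiliar with the "parallel pancakes" distribution mentioned above, the following is a minimal, purely illustrative sketch in Python of sampling from a distribution of this general type: along a hidden direction the mass concentrates in thin, evenly spaced slabs, while every orthogonal direction looks like a standard Gaussian. This is only a schematic of the kind of distribution the abstract refers to, not the construction from the paper; the function name and all numeric parameters are hypothetical.

import numpy as np

def sample_parallel_pancakes(n_samples, dim, n_pancakes=5, spacing=1.0, width=0.05, seed=0):
    # Illustrative "parallel pancakes"-style sampler (hypothetical parameters).
    rng = np.random.default_rng(seed)

    # Hidden unit direction v (unknown to the learner).
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)

    # Evenly spaced pancake centers along v, symmetric around 0.
    centers = spacing * (np.arange(n_pancakes) - (n_pancakes - 1) / 2)

    # Start from standard Gaussian samples, then replace the component
    # along v by a narrow mixture over the pancake centers.
    x = rng.normal(size=(n_samples, dim))
    x -= np.outer(x @ v, v)                       # remove the component along v
    chosen = rng.choice(centers, size=n_samples)  # pick a pancake per sample
    along_v = chosen + width * rng.normal(size=n_samples)
    x += np.outer(along_v, v)
    return x, v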

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-tiegel24a,
  title     = {Improved Hardness Results for Learning Intersections of Halfspaces},
  author    = {Tiegel, Stefan},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {4764--4786},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/tiegel24a/tiegel24a.pdf},
  url       = {https://proceedings.mlr.press/v247/tiegel24a.html}
}
APA
Tiegel, S. (2024). Improved Hardness Results for Learning Intersections of Halfspaces. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:4764-4786. Available from https://proceedings.mlr.press/v247/tiegel24a.html.
