Hardness of Learning Noisy Halfspaces using Polynomial Thresholds

Arnab Bhattacharyya, Suprovat Ghoshal, Rishi Saket
Proceedings of the 31st Conference On Learning Theory, PMLR 75:876-917, 2018.

Abstract

We prove the hardness of weakly learning halfspaces in the presence of adversarial noise using polynomial threshold functions (PTFs). In particular, we prove that for any constants $d \in \mathbb{Z}^+$ and $\epsilon > 0$, it is NP-hard to decide: given a set of $\{-1,1\}$-labeled points in $\mathbb{R}^n$, whether (YES Case) there exists a halfspace that classifies a $(1-\epsilon)$-fraction of the points correctly, or (NO Case) any degree-$d$ PTF classifies at most a $(1/2 + \epsilon)$-fraction of the points correctly. This strengthens to all constant degrees the previous NP-hardness of learning using degree-$2$ PTFs shown by Diakonikolas et al. (2011). The latter result had remained the only progress over the works of Feldman et al. (2006) and Guruswami et al. (2006) ruling out weakly proper learning of adversarially noisy halfspaces.
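To make the objects in the theorem statement concrete, here is a minimal illustrative sketch (not from the paper) of a halfspace classifier, i.e. a degree-$1$ PTF, evaluated on a toy set of $\{-1,1\}$-labeled points with one adversarially flipped label; the weight vector, threshold, and data are hypothetical choices for illustration only.

```python
import numpy as np

def halfspace(w, theta):
    """Halfspace classifier sign(<w, x> - theta): the degree-1 case of a PTF."""
    return lambda x: 1 if np.dot(w, x) - theta >= 0 else -1

def accuracy(clf, points, labels):
    """Fraction of the {-1,1}-labeled points that the classifier gets right."""
    return sum(clf(x) == y for x, y in zip(points, labels)) / len(labels)

# Toy instance in R^2: labels come from w = (1, 1), theta = 0, except the
# last label, which is flipped to model adversarial noise.
points = [np.array(p) for p in [(2, 1), (1, 2), (-1, -2), (-2, -1)]]
labels = [1, 1, -1, 1]  # last label adversarially flipped
clf = halfspace(np.array([1.0, 1.0]), 0.0)
print(accuracy(clf, points, labels))  # agrees on 3 of 4 points -> 0.75
```

A degree-$d$ PTF generalizes this by taking the sign of a degree-$d$ polynomial in the coordinates of $x$; the NO case of the theorem says that even this richer class cannot beat accuracy $1/2 + \epsilon$ on the hard instances.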

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-bhattacharyya18a,
  title     = {Hardness of Learning Noisy Halfspaces using Polynomial Thresholds},
  author    = {Bhattacharyya, Arnab and Ghoshal, Suprovat and Saket, Rishi},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {876--917},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/bhattacharyya18a/bhattacharyya18a.pdf},
  url       = {https://proceedings.mlr.press/v75/bhattacharyya18a.html},
  abstract  = {We prove the hardness of weakly learning halfspaces in the presence of adversarial noise using polynomial threshold functions (PTFs). In particular, we prove that for any constants $d \in \mathbb{Z}^+$ and $\epsilon > 0$, it is NP-hard to decide: given a set of $\{-1,1\}$-labeled points in $\mathbb{R}^n$, whether (YES Case) there exists a halfspace that classifies a $(1-\epsilon)$-fraction of the points correctly, or (NO Case) any degree-$d$ PTF classifies at most a $(1/2 + \epsilon)$-fraction of the points correctly. This strengthens to all constant degrees the previous NP-hardness of learning using degree-$2$ PTFs shown by Diakonikolas et al. (2011). The latter result had remained the only progress over the works of Feldman et al. (2006) and Guruswami et al. (2006) ruling out weakly proper learning of adversarially noisy halfspaces.}
}
Endnote
%0 Conference Paper %T Hardness of Learning Noisy Halfspaces using Polynomial Thresholds %A Arnab Bhattacharyya %A Suprovat Ghoshal %A Rishi Saket %B Proceedings of the 31st Conference On Learning Theory %C Proceedings of Machine Learning Research %D 2018 %E Sébastien Bubeck %E Vianney Perchet %E Philippe Rigollet %F pmlr-v75-bhattacharyya18a %I PMLR %P 876--917 %U https://proceedings.mlr.press/v75/bhattacharyya18a.html %V 75 %X We prove the hardness of weakly learning halfspaces in the presence of adversarial noise using polynomial threshold functions (PTFs). In particular, we prove that for any constants $d \in \mathbb{Z}^+$ and $\epsilon > 0$, it is NP-hard to decide: given a set of $\{-1,1\}$-labeled points in $\mathbb{R}^n$, whether (YES Case) there exists a halfspace that classifies a $(1-\epsilon)$-fraction of the points correctly, or (NO Case) any degree-$d$ PTF classifies at most a $(1/2 + \epsilon)$-fraction of the points correctly. This strengthens to all constant degrees the previous NP-hardness of learning using degree-$2$ PTFs shown by Diakonikolas et al. (2011). The latter result had remained the only progress over the works of Feldman et al. (2006) and Guruswami et al. (2006) ruling out weakly proper learning of adversarially noisy halfspaces.
APA
Bhattacharyya, A., Ghoshal, S. & Saket, R. (2018). Hardness of Learning Noisy Halfspaces using Polynomial Thresholds. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:876-917. Available from https://proceedings.mlr.press/v75/bhattacharyya18a.html.