Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise

Ilias Diakonikolas, Daniel Kane
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4258-4282, 2022.

Abstract

We study the problem of PAC learning halfspaces with Massart noise. Given labeled samples $(x, y)$ from a distribution $D$ on $\R^{d} \times \{ \pm 1\}$ such that the marginal $D_x$ on the examples is arbitrary and the label $y$ of example $x$ is generated from the target halfspace corrupted by a Massart adversary with flipping probability $\eta(x) \leq \eta \leq 1/2$, the goal is to compute a hypothesis with small misclassification error. The best known $\poly(d, 1/\eps)$-time algorithms for this problem achieve error of $\eta+\eps$, which can be far from the optimal bound of $\opt+\eps$, where $\opt = \E_{x \sim D_x} [\eta(x)]$. While it is known that achieving $\opt+o(1)$ error requires super-polynomial time in the Statistical Query model, a large gap remains between known upper and lower bounds. In this work, we essentially characterize the efficient learnability of Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show that no efficient SQ algorithm for learning Massart halfspaces on $\R^d$ can achieve error better than $\Omega(\eta)$, even if $\opt = 2^{-\log^{c} (d)}$, for any universal constant $c \in (0, 1)$. Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to $0$ when $\eta$ approaches $1/2$. Our results provide strong evidence that known learning algorithms for Massart halfspaces are nearly best possible.
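To make the setting concrete, here is an informal restatement of the Massart noise model described in the abstract; the halfspace notation $f(x) = \mathrm{sign}(\langle w, x \rangle - t)$ is introduced here only for illustration and is not taken from the paper. For each example $x \sim D_x$, the adversary chooses a flipping probability $\eta(x) \leq \eta \leq 1/2$ and generates the label as
$$ y = \begin{cases} f(x) & \text{with probability } 1 - \eta(x), \\ -f(x) & \text{with probability } \eta(x), \end{cases} $$
so the best attainable misclassification error is $\opt = \E_{x \sim D_x} [\eta(x)]$. The gap between the $\eta+\eps$ guarantee of known polynomial-time algorithms and the optimal $\opt+\eps$ can therefore be large: for instance, if $\eta(x) = 0$ on all but a vanishing fraction of examples, then $\opt \approx 0$ even though $\eta$ may be close to $1/2$.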

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-diakonikolas22b,
  title     = {Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise},
  author    = {Diakonikolas, Ilias and Kane, Daniel},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {4258--4282},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/diakonikolas22b/diakonikolas22b.pdf},
  url       = {https://proceedings.mlr.press/v178/diakonikolas22b.html},
  abstract  = {We study the problem of PAC learning halfspaces with Massart noise. Given labeled samples $(x, y)$ from a distribution $D$ on $\R^{d} \times \{ \pm 1\}$ such that the marginal $D_x$ on the examples is arbitrary and the label $y$ of example $x$ is generated from the target halfspace corrupted by a Massart adversary with flipping probability $\eta(x) \leq \eta \leq 1/2$, the goal is to compute a hypothesis with small misclassification error. The best known $\poly(d, 1/\eps)$-time algorithms for this problem achieve error of $\eta+\eps$, which can be far from the optimal bound of $\opt+\eps$, where $\opt = \E_{x \sim D_x} [\eta(x)]$. While it is known that achieving $\opt+o(1)$ error requires super-polynomial time in the Statistical Query model, a large gap remains between known upper and lower bounds. In this work, we essentially characterize the efficient learnability of Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show that no efficient SQ algorithm for learning Massart halfspaces on $\R^d$ can achieve error better than $\Omega(\eta)$, even if $\opt = 2^{-\log^{c} (d)}$, for any universal constant $c \in (0, 1)$. Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to $0$ when $\eta$ approaches $1/2$. Our results provide strong evidence that known learning algorithms for Massart halfspaces are nearly best possible.}
}
Endnote
%0 Conference Paper
%T Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise
%A Ilias Diakonikolas
%A Daniel Kane
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-diakonikolas22b
%I PMLR
%P 4258--4282
%U https://proceedings.mlr.press/v178/diakonikolas22b.html
%V 178
%X We study the problem of PAC learning halfspaces with Massart noise. Given labeled samples $(x, y)$ from a distribution $D$ on $\R^{d} \times \{ \pm 1\}$ such that the marginal $D_x$ on the examples is arbitrary and the label $y$ of example $x$ is generated from the target halfspace corrupted by a Massart adversary with flipping probability $\eta(x) \leq \eta \leq 1/2$, the goal is to compute a hypothesis with small misclassification error. The best known $\poly(d, 1/\eps)$-time algorithms for this problem achieve error of $\eta+\eps$, which can be far from the optimal bound of $\opt+\eps$, where $\opt = \E_{x \sim D_x} [\eta(x)]$. While it is known that achieving $\opt+o(1)$ error requires super-polynomial time in the Statistical Query model, a large gap remains between known upper and lower bounds. In this work, we essentially characterize the efficient learnability of Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show that no efficient SQ algorithm for learning Massart halfspaces on $\R^d$ can achieve error better than $\Omega(\eta)$, even if $\opt = 2^{-\log^{c} (d)}$, for any universal constant $c \in (0, 1)$. Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to $0$ when $\eta$ approaches $1/2$. Our results provide strong evidence that known learning algorithms for Massart halfspaces are nearly best possible.
APA
Diakonikolas, I. & Kane, D. (2022). Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:4258-4282. Available from https://proceedings.mlr.press/v178/diakonikolas22b.html.