# Hardness of Learning a Single Neuron with Adversarial Label Noise

*Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, PMLR 151:8199-8213, 2022.

#### Abstract

We study the problem of distribution-free learning of a single neuron under adversarial label noise with respect to the squared loss. For a wide range of activation functions, including ReLUs and sigmoids, we prove hardness-of-learning results in the Statistical Query model and under a well-studied assumption on the complexity of refuting XOR formulas. Specifically, we establish that no polynomial-time learning algorithm, even an improper one, can approximate the optimal loss value to within any constant factor.
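The learning objective underlying the abstract can be sketched concretely: given labeled examples, find weights minimizing the squared loss of a single-neuron predictor. The snippet below is an illustrative sketch only (the function names and synthetic data are hypothetical, not from the paper); adversarial label noise would correspond to an adversary corrupting some fraction of the labels `y`.

```python
import numpy as np

def relu(z):
    # ReLU activation, one of the activations covered by the hardness result
    return np.maximum(z, 0.0)

def squared_loss(w, X, y):
    """Mean squared error of the single-neuron predictor x -> relu(w . x)."""
    preds = relu(X @ w)
    return np.mean((preds - y) ** 2)

# Hypothetical synthetic instance: clean (noiseless) labels generated by a
# true weight vector. Under adversarial label noise, a bounded fraction of
# y would instead be corrupted arbitrarily.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = relu(X @ w_true)

print(squared_loss(w_true, X, y))  # 0.0 at the true weights on clean labels
```

The hardness result says that, in the agnostic (adversarial-noise) setting, no polynomial-time algorithm can even approximate the minimum of this objective to within a constant factor.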