Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1364-1378, 2024.
Abstract
We study the problem of PAC learning a linear combination of k ReLU activations under the standard Gaussian distribution on ℝ^d with respect to the square loss. Our main result is an efficient algorithm for this learning task with sample and computational complexity (dk/ϵ)^{O(k)}, where ϵ > 0 is the target accuracy. Prior work had given an algorithm for this problem with complexity (dk/ϵ)^{h(k)}, where the function h(k) scales super-polynomially in k. Interestingly, the complexity of our algorithm is near-optimal within the class of Correlational Statistical Query algorithms. At a high level, our algorithm uses tensor decomposition to identify a subspace such that all the O(k)-order moments are small in the orthogonal directions. Its analysis makes essential use of the theory of Schur polynomials to show that the higher-order moment error tensors are small given that the lower-order ones are.
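To make the subspace-identification idea concrete, here is a minimal toy sketch, not the paper's algorithm: it uses only the degree-2 moment E[y(xx^T − I)] of a one-hidden-layer ReLU network with Gaussian inputs, whose top eigenspace approximates span{w_i} when the combining coefficients do not cancel (the paper's use of O(k)-order moment tensors handles the general case). The network sizes, weights, and sample count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 2, 200_000  # illustrative dimensions and sample size

# Toy target: f(x) = sum_i a_i * ReLU(<w_i, x>), with positive a_i so
# the degree-2 moments of the individual units cannot cancel.
W = rng.normal(size=(k, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
a = np.ones(k)

# Samples from the standard Gaussian on R^d and their labels.
X = rng.normal(size=(n, d))
y = np.maximum(X @ W.T, 0.0) @ a

# Empirical degree-2 moment matrix M ≈ E[y (x x^T - I)].
# Under Gaussian x, M = c2 * sum_i a_i w_i w_i^T with c2 = 1/sqrt(2*pi) > 0
# for ReLU, so its top-k eigenspace approximates span{w_i}.
M = (X * y[:, None]).T @ X / n - y.mean() * np.eye(d)

eigvals, eigvecs = np.linalg.eigh(M)
V = eigvecs[:, np.argsort(-np.abs(eigvals))[:k]]  # estimated relevant subspace

# Sanity check: the true weights lose little mass when projected onto V,
# i.e. moments orthogonal to V are small.
residual = np.linalg.norm(W - (W @ V) @ V.T)
```

After extracting V, one would learn the low-dimensional function restricted to the subspace; the paper's analysis via Schur polynomials controls the error of the analogous higher-order moment tensors.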