Learning Narrow One-Hidden-Layer ReLU Networks

Sitan Chen, Zehao Dou, Surbhi Goel, Adam Klivans, Raghu Meka
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5580-5614, 2023.

Abstract

We consider the well-studied problem of learning a linear combination of $k$ ReLU activations with respect to a Gaussian distribution on inputs in $d$ dimensions. We give the first polynomial-time algorithm that succeeds whenever $k$ is a constant. All prior polynomial-time learners require additional assumptions on the network, such as positive combining coefficients or the matrix of hidden weight vectors being well-conditioned. Our approach is based on analyzing random contractions of higher-order moment tensors. We use a multi-scale clustering procedure to argue that sufficiently close neurons can be collapsed together, sidestepping the conditioning issues present in prior work. This allows us to design an iterative procedure to discover individual neurons.
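For concreteness, here is a minimal formalization of the setting described above; the notation ($\lambda_i$, $w_i$, $\hat{f}$) is ours rather than the paper's. The learner receives i.i.d. samples $(x, f(x))$ with $x \sim \mathcal{N}(0, I_d)$, where

$$f(x) = \sum_{i=1}^{k} \lambda_i \, \mathrm{ReLU}(\langle w_i, x \rangle), \qquad \lambda_i \in \mathbb{R}, \; w_i \in \mathbb{R}^d,$$

and must output a hypothesis $\hat{f}$ achieving small squared error $\mathbb{E}_{x \sim \mathcal{N}(0, I_d)}\big[(\hat{f}(x) - f(x))^2\big]$ in time polynomial in $d$ for constant $k$. A random contraction of an order-$\ell$ moment tensor $T \in (\mathbb{R}^d)^{\otimes \ell}$, in the generic sense, is the $d \times d$ matrix $T(g, \ldots, g, \cdot, \cdot)$ obtained by contracting $\ell - 2$ of the modes against a random direction $g$.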

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-chen23a,
  title = {Learning Narrow One-Hidden-Layer ReLU Networks},
  author = {Chen, Sitan and Dou, Zehao and Goel, Surbhi and Klivans, Adam and Meka, Raghu},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages = {5580--5614},
  year = {2023},
  editor = {Neu, Gergely and Rosasco, Lorenzo},
  volume = {195},
  series = {Proceedings of Machine Learning Research},
  month = {12--15 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v195/chen23a/chen23a.pdf},
  url = {https://proceedings.mlr.press/v195/chen23a.html},
  abstract = {We consider the well-studied problem of learning a linear combination of $k$ ReLU activations with respect to a Gaussian distribution on inputs in $d$ dimensions. We give the first polynomial-time algorithm that succeeds whenever $k$ is a constant. All prior polynomial-time learners require additional assumptions on the network, such as positive combining coefficients or the matrix of hidden weight vectors being well-conditioned. Our approach is based on analyzing random contractions of higher-order moment tensors. We use a multi-scale clustering procedure to argue that sufficiently close neurons can be collapsed together, sidestepping the conditioning issues present in prior work. This allows us to design an iterative procedure to discover individual neurons.}
}
Endnote
%0 Conference Paper
%T Learning Narrow One-Hidden-Layer ReLU Networks
%A Sitan Chen
%A Zehao Dou
%A Surbhi Goel
%A Adam Klivans
%A Raghu Meka
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-chen23a
%I PMLR
%P 5580--5614
%U https://proceedings.mlr.press/v195/chen23a.html
%V 195
%X We consider the well-studied problem of learning a linear combination of $k$ ReLU activations with respect to a Gaussian distribution on inputs in $d$ dimensions. We give the first polynomial-time algorithm that succeeds whenever $k$ is a constant. All prior polynomial-time learners require additional assumptions on the network, such as positive combining coefficients or the matrix of hidden weight vectors being well-conditioned. Our approach is based on analyzing random contractions of higher-order moment tensors. We use a multi-scale clustering procedure to argue that sufficiently close neurons can be collapsed together, sidestepping the conditioning issues present in prior work. This allows us to design an iterative procedure to discover individual neurons.
APA
Chen, S., Dou, Z., Goel, S., Klivans, A. & Meka, R. (2023). Learning Narrow One-Hidden-Layer ReLU Networks. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5580-5614. Available from https://proceedings.mlr.press/v195/chen23a.html.
