Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks

Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Nikos Zarifis
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:1514-1539, 2020.

Abstract

We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$. Previously, no polynomial-time algorithm was known, even for $k=3$. This answers an open question posed by Klivans (2017). Importantly, our algorithm does not require any assumptions about the rank of the weight matrix, and its complexity is independent of the matrix's condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with arbitrary real coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.
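To make the learning setting concrete, the following minimal NumPy sketch draws labeled samples from the data-generating model described above: inputs are standard Gaussian, and labels are a positive combination of $k$ ReLU units plus additive noise. The function name sample_relu_network, the coefficient range, and the noise level are illustrative assumptions, not taken from the paper.

import numpy as np

def sample_relu_network(n, d, k, noise_std=0.1, seed=None):
    """Draw n labeled samples from a one-hidden-layer ReLU network
    with k hidden units under the Gaussian marginal N(0, I_d).

    Labels follow y = sum_i a_i * ReLU(<w_i, x>) + additive noise,
    with positive coefficients a_i, matching the regime in which the
    paper gives a polynomial-time algorithm.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((k, d))           # hidden-unit weight vectors w_i
    a = rng.uniform(0.5, 1.5, size=k)         # positive coefficients a_i (illustrative range)
    X = rng.standard_normal((n, d))           # Gaussian marginals: x ~ N(0, I_d)
    hidden = np.maximum(W @ X.T, 0.0)         # ReLU activations, shape (k, n)
    y = a @ hidden + noise_std * rng.standard_normal(n)  # additive label noise
    return X, y, W, a

X, y, W, a = sample_relu_network(n=1000, d=50, k=3)

Allowing the coefficients a_i to be arbitrary reals instead yields the harder class for which the paper proves the $d^{\Omega(k)}$ Statistical Query lower bound.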

Cite this Paper

BibTeX
@InProceedings{pmlr-v125-diakonikolas20d,
  title     = {Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks},
  author    = {Diakonikolas, Ilias and Kane, Daniel M. and Kontonis, Vasilis and Zarifis, Nikos},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {1514--1539},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/diakonikolas20d/diakonikolas20d.pdf},
  url       = {https://proceedings.mlr.press/v125/diakonikolas20d.html}
}
Endnote
%0 Conference Paper
%T Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
%A Ilias Diakonikolas
%A Daniel M. Kane
%A Vasilis Kontonis
%A Nikos Zarifis
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-diakonikolas20d
%I PMLR
%P 1514--1539
%U https://proceedings.mlr.press/v125/diakonikolas20d.html
%V 125
APA
Diakonikolas, I., Kane, D. M., Kontonis, V., & Zarifis, N. (2020). Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:1514-1539. Available from https://proceedings.mlr.press/v125/diakonikolas20d.html.