Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks

Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Nikos Zarifis
Proceedings of the Thirty Third Conference on Learning Theory, PMLR 125:1514-1539, 2020.

Abstract

We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$. Previously, no polynomial-time algorithm was known, even for $k=3$. This answers an open question posed by Klivans (2017). Importantly, our algorithm does not require any assumptions about the rank of the weight matrix, and its complexity is independent of the matrix's condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with arbitrary real coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.
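For concreteness, the data-generation model in the abstract can be sketched as follows. This is an illustrative simulation only (not the paper's algorithm); the function and parameter names are made up here. Labels are $y = \sum_{i=1}^{k} a_i\,\mathrm{ReLU}(w_i \cdot x) + \text{noise}$ with $x \sim \mathcal{N}(0, I_d)$, and the tractable case studied in the paper takes all coefficients $a_i$ positive.

```python
import numpy as np

def relu(z):
    # ReLU activation: max(z, 0), applied elementwise
    return np.maximum(z, 0.0)

def sample_batch(W, a, sigma, n, rng):
    """Draw n labeled examples from the model in the abstract:
    x ~ N(0, I_d) (Gaussian marginals on R^d),
    y = sum_i a_i * ReLU(w_i . x) + additive label noise.

    W: (k, d) weight matrix (row i is hidden unit i's weights)
    a: (k,) coefficients -- all positive in the tractable case
    sigma: standard deviation of the additive Gaussian label noise
    """
    d = W.shape[1]
    X = rng.standard_normal((n, d))              # Gaussian marginals
    clean = relu(X @ W.T) @ a                    # one-hidden-layer ReLU network
    y = clean + sigma * rng.standard_normal(n)   # additive label noise
    return X, y

# Example: k = 3 hidden units in d = 10 dimensions
rng = np.random.default_rng(0)
d, k = 10, 3
W = rng.standard_normal((k, d))
a = np.abs(rng.standard_normal(k))   # positive coefficients
X, y = sample_batch(W, a, sigma=0.1, n=1000, rng=rng)
```

With positive coefficients and no noise, every label is nonnegative, since each ReLU output is nonnegative; the noise model perturbs these labels additively.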

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-diakonikolas20d,
  title     = {Algorithms and {SQ} Lower Bounds for {PAC} Learning One-Hidden-Layer {ReLU} Networks},
  author    = {Diakonikolas, Ilias and Kane, Daniel M. and Kontonis, Vasilis and Zarifis, Nikos},
  pages     = {1514--1539},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/diakonikolas20d/diakonikolas20d.pdf},
  url       = {http://proceedings.mlr.press/v125/diakonikolas20d.html},
  abstract  = {We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$. Previously, no polynomial time algorithm was known, even for $k=3$. This answers an open question posed by Klivans (2017). Importantly, our algorithm does not require any assumptions about the rank of the weight matrix and its complexity is independent of its condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with arbitrary real coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.}
}
Endnote
%0 Conference Paper
%T Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
%A Ilias Diakonikolas
%A Daniel M. Kane
%A Vasilis Kontonis
%A Nikos Zarifis
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-diakonikolas20d
%I PMLR
%J Proceedings of Machine Learning Research
%P 1514--1539
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X We study the problem of PAC learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals in the presence of additive label noise. For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$. Previously, no polynomial time algorithm was known, even for $k=3$. This answers an open question posed by Klivans (2017). Importantly, our algorithm does not require any assumptions about the rank of the weight matrix and its complexity is independent of its condition number. On the negative side, for the more general task of PAC learning one-hidden-layer ReLU networks with arbitrary real coefficients, we prove a Statistical Query lower bound of $d^{\Omega(k)}$. Thus, we provide a separation between the two classes in terms of efficient learnability. Our upper and lower bounds are general, extending to broader families of activation functions.
APA
Diakonikolas, I., Kane, D. M., Kontonis, V., & Zarifis, N. (2020). Algorithms and SQ lower bounds for PAC learning one-hidden-layer ReLU networks. Proceedings of the Thirty Third Conference on Learning Theory, in PMLR 125:1514-1539.