On the Approximation Power of Two-Layer Networks of Random ReLUs

Daniel Hsu, Clayton H Sanford, Rocco Servedio, Emmanouil Vasileios Vlatakis-Gkaragkounis
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:2423-2461, 2021.

Abstract

This paper considers the following question: how well can depth-two ReLU networks with randomly initialized bottom-level weights represent smooth functions? We give near-matching upper and lower bounds for L2-approximation in terms of the Lipschitz constant, the desired accuracy, and the dimension of the problem, as well as similar results in terms of Sobolev norms. Our positive results employ tools from harmonic analysis and ridgelet representation theory, while our lower bounds are based on (robust versions of) dimensionality arguments.
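
To make the setting concrete, the following is a minimal numpy sketch of the architecture the question refers to: a depth-two network whose bottom-level ReLU weights and biases are drawn at random and frozen, with only the top-level linear coefficients fit to a target function, and the L2 error estimated by sampling. The Gaussian weight distribution, unit-ball domain, and target function below are illustrative assumptions, not the paper's construction; the paper's results concern how the required width must scale with the Lipschitz constant, the accuracy, and the dimension.

import numpy as np

rng = np.random.default_rng(0)
d, width, n_samples = 5, 500, 5000

# Random, frozen bottom layer: unit-norm directions w_j and biases b_j.
# (Illustrative choice of distribution, not the paper's ridgelet-based one.)
W = rng.normal(size=(width, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
b = rng.uniform(-1.0, 1.0, size=width)

def features(X):
    """ReLU features relu(<w_j, x> + b_j) for each hidden unit j."""
    return np.maximum(X @ W.T + b, 0.0)

# Sample points uniformly from the unit ball to estimate the L2 error.
X = rng.normal(size=(n_samples, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
X *= rng.uniform(0.0, 1.0, size=(n_samples, 1)) ** (1.0 / d)

# An illustrative smooth, Lipschitz target function.
f = np.sin(X[:, 0] + X[:, 1])

# Fit only the top-level coefficients by least squares; the bottom layer stays fixed.
Phi = features(X)
a, *_ = np.linalg.lstsq(Phi, f, rcond=None)

approx = Phi @ a
l2_error = np.sqrt(np.mean((f - approx) ** 2))
print(f"empirical L2 approximation error: {l2_error:.4f}")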

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-hsu21a,
  title     = {On the Approximation Power of Two-Layer Networks of Random ReLUs},
  author    = {Hsu, Daniel and Sanford, Clayton H and Servedio, Rocco and Vlatakis-Gkaragkounis, Emmanouil Vasileios},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {2423--2461},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/hsu21a/hsu21a.pdf},
  url       = {https://proceedings.mlr.press/v134/hsu21a.html},
  abstract  = {This paper considers the following question: how well can depth-two ReLU networks with randomly initialized bottom-level weights represent smooth functions? We give near-matching upper- and lower-bounds for L2-approximation in terms of the Lipschitz constant, the desired accuracy, and the dimension of the problem, as well as similar results in terms of Sobolev norms. Our positive results employ tools from harmonic analysis and ridgelet representation theory, while our lower-bounds are based on (robust versions of) dimensionality arguments.}
}
Endnote
%0 Conference Paper
%T On the Approximation Power of Two-Layer Networks of Random ReLUs
%A Daniel Hsu
%A Clayton H Sanford
%A Rocco Servedio
%A Emmanouil Vasileios Vlatakis-Gkaragkounis
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-hsu21a
%I PMLR
%P 2423--2461
%U https://proceedings.mlr.press/v134/hsu21a.html
%V 134
%X This paper considers the following question: how well can depth-two ReLU networks with randomly initialized bottom-level weights represent smooth functions? We give near-matching upper- and lower-bounds for L2-approximation in terms of the Lipschitz constant, the desired accuracy, and the dimension of the problem, as well as similar results in terms of Sobolev norms. Our positive results employ tools from harmonic analysis and ridgelet representation theory, while our lower-bounds are based on (robust versions of) dimensionality arguments.
APA
Hsu, D., Sanford, C.H., Servedio, R. & Vlatakis-Gkaragkounis, E.V. (2021). On the Approximation Power of Two-Layer Networks of Random ReLUs. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:2423-2461. Available from https://proceedings.mlr.press/v134/hsu21a.html.