Bayes-optimal Learning of Deep Random Networks of Extensive-width

Hugo Cui, Florent Krzakala, Lenka Zdeborova
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:6468-6521, 2023.

Abstract

We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width are proportionally large and propose a closed-form expression for the Bayes-optimal test error, for regression and classification tasks. We further compute closed-form expressions for the test errors of ridge regression, kernel and random features regression. We find, in particular, that optimally regularized ridge regression, as well as kernel regression, achieve Bayes-optimal performances, while the logistic loss yields a near-optimal test error for classification. We further show numerically that when the number of samples grows faster than the dimension, ridge and kernel methods become suboptimal, while neural networks achieve test error close to zero from quadratically many samples.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-cui23b,
  title     = {{B}ayes-optimal Learning of Deep Random Networks of Extensive-width},
  author    = {Cui, Hugo and Krzakala, Florent and Zdeborova, Lenka},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {6468--6521},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/cui23b/cui23b.pdf},
  url       = {https://proceedings.mlr.press/v202/cui23b.html},
  abstract  = {We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width are proportionally large and propose a closed-form expression for the Bayes-optimal test error, for regression and classification tasks. We further compute closed-form expressions for the test errors of ridge regression, kernel and random features regression. We find, in particular, that optimally regularized ridge regression, as well as kernel regression, achieve Bayes-optimal performances, while the logistic loss yields a near-optimal test error for classification. We further show numerically that when the number of samples grows faster than the dimension, ridge and kernel methods become suboptimal, while neural networks achieve test error close to zero from quadratically many samples.}
}
Endnote
%0 Conference Paper
%T Bayes-optimal Learning of Deep Random Networks of Extensive-width
%A Hugo Cui
%A Florent Krzakala
%A Lenka Zdeborova
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-cui23b
%I PMLR
%P 6468--6521
%U https://proceedings.mlr.press/v202/cui23b.html
%V 202
%X We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights. We consider the asymptotic limit where the number of samples, the input dimension and the network width are proportionally large and propose a closed-form expression for the Bayes-optimal test error, for regression and classification tasks. We further compute closed-form expressions for the test errors of ridge regression, kernel and random features regression. We find, in particular, that optimally regularized ridge regression, as well as kernel regression, achieve Bayes-optimal performances, while the logistic loss yields a near-optimal test error for classification. We further show numerically that when the number of samples grows faster than the dimension, ridge and kernel methods become suboptimal, while neural networks achieve test error close to zero from quadratically many samples.
APA
Cui, H., Krzakala, F. & Zdeborova, L. (2023). Bayes-optimal Learning of Deep Random Networks of Extensive-width. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:6468-6521. Available from https://proceedings.mlr.press/v202/cui23b.html.