Risk bounds for aggregated shallow neural networks using Gaussian priors

Laura Tinsi, Arnak Dalalyan
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:227-253, 2022.

Abstract

Analysing statistical properties of neural networks is a central topic in statistics and machine learning. However, most results in the literature focus on the properties of the neural network minimizing the training error. The goal of this paper is to consider aggregated neural networks using a Gaussian prior. The departure point of our approach is an arbitrary aggregate satisfying the PAC-Bayesian inequality. The main contribution is a precise nonasymptotic assessment of the estimation error appearing in the PAC-Bayes bound. Our analysis is sharp enough to lead to minimax rates of estimation over Sobolev smoothness classes.
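For context, the display below sketches the generic form of a PAC-Bayes-type oracle inequality of the kind the abstract alludes to. This is an illustrative paraphrase, not the paper's exact statement: the notation (temperature $\lambda$, prior $\pi$, posterior $\rho$, smoothness $s$) is ours.

% Generic PAC-Bayes oracle inequality for an aggregated estimator
% (illustrative sketch; the paper's precise statement may differ).
\[
  \mathbb{E}\,\|\hat f_n - f^*\|_{L_2}^2
  \;\le\; \inf_{\rho \ll \pi}
  \Big\{ \int \|f_\theta - f^*\|_{L_2}^2 \,\rho(d\theta)
  \;+\; \frac{\lambda\, \mathrm{KL}(\rho \,\|\, \pi)}{n} \Big\},
\]

where $\hat f_n$ is the aggregate built from $n$ observations, $f_\theta$ ranges over shallow neural networks with parameter $\theta$, $\pi$ is the (Gaussian) prior over $\theta$, and $\lambda > 0$ is a temperature parameter. The right-hand side trades off an approximation term against a Kullback-Leibler complexity term; bounding it sharply when $f^*$ belongs to a Sobolev ball of smoothness $s$ in dimension $d$ is what yields estimation rates of order $n^{-2s/(2s+d)}$, the minimax rate referred to in the abstract.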

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-tinsi22a,
  title     = {Risk bounds for aggregated shallow neural networks using Gaussian priors},
  author    = {Tinsi, Laura and Dalalyan, Arnak},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {227--253},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/tinsi22a/tinsi22a.pdf},
  url       = {https://proceedings.mlr.press/v178/tinsi22a.html},
  abstract  = {Analysing statistical properties of neural networks is a central topic in statistics and machine learning. However, most results in the literature focus on the properties of the neural network minimizing the training error. The goal of this paper is to consider aggregated neural networks using a Gaussian prior. The departure point of our approach is an arbitrary aggregate satisfying the PAC-Bayesian inequality. The main contribution is a precise nonasymptotic assessment of the estimation error appearing in the PAC-Bayes bound. Our analysis is sharp enough to lead to minimax rates of estimation over Sobolev smoothness classes.}
}
Endnote
%0 Conference Paper
%T Risk bounds for aggregated shallow neural networks using Gaussian priors
%A Laura Tinsi
%A Arnak Dalalyan
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-tinsi22a
%I PMLR
%P 227--253
%U https://proceedings.mlr.press/v178/tinsi22a.html
%V 178
%X Analysing statistical properties of neural networks is a central topic in statistics and machine learning. However, most results in the literature focus on the properties of the neural network minimizing the training error. The goal of this paper is to consider aggregated neural networks using a Gaussian prior. The departure point of our approach is an arbitrary aggregate satisfying the PAC-Bayesian inequality. The main contribution is a precise nonasymptotic assessment of the estimation error appearing in the PAC-Bayes bound. Our analysis is sharp enough to lead to minimax rates of estimation over Sobolev smoothness classes.
APA
Tinsi, L. & Dalalyan, A. (2022). Risk bounds for aggregated shallow neural networks using Gaussian priors. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:227-253. Available from https://proceedings.mlr.press/v178/tinsi22a.html.
