Bayesian neural network unit priors and generalized Weibull-tail property

Mariia Vladimirova, Julyan Arbel, Stéphane Girard
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:1397-1412, 2021.

Abstract

The connection between Bayesian neural networks and Gaussian processes has gained a lot of attention in the last few years. Hidden units have been proven to follow a Gaussian process limit when the layer width tends to infinity. Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they adapt their internal representations flexibly. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of hidden units' tails, which shows that unit priors become heavier-tailed with depth, thanks to the introduced notion of generalized Weibull-tail distributions. This finding sheds light on the behavior of hidden units in finite Bayesian neural networks.
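As a rough illustration of the heavier-tail claim, the following minimal Monte Carlo sketch (not the authors' code) samples the hidden units of a finite-width multilayer perceptron under i.i.d. Gaussian priors on the weights and compares tail heaviness across layers via excess kurtosis and a high quantile. The ReLU activation, the 1/sqrt(fan_in) weight scaling, the widths and sample sizes, and the helper name sample_preactivations are all illustrative assumptions, not choices taken from the paper.

# Minimal sketch, assuming a ReLU MLP with i.i.d. Gaussian weight priors and no biases.
import numpy as np

rng = np.random.default_rng(0)

def sample_preactivations(n_samples=50_000, depth=3, width=16, input_dim=16):
    """Prior samples of one pre-activation unit per layer, at a fixed input."""
    x = rng.standard_normal(input_dim)            # one fixed network input
    h = np.tile(x, (n_samples, 1))                # each Monte Carlo sample gets its own weights
    units = []
    for _ in range(depth):
        fan_in = h.shape[1]
        # i.i.d. Gaussian prior on weights, scaled so pre-activations stay O(1)
        w = rng.standard_normal((n_samples, fan_in, width)) / np.sqrt(fan_in)
        g = np.einsum('ni,niw->nw', h, w)         # pre-activations of this layer
        units.append(g[:, 0])                     # track the first unit of the layer
        h = np.maximum(g, 0.0)                    # ReLU nonlinearity
    return units

for ell, g in enumerate(sample_preactivations(), start=1):
    excess_kurtosis = np.mean((g - g.mean()) ** 4) / g.var() ** 2 - 3.0
    q999 = np.quantile(np.abs(g), 0.999)
    print(f"layer {ell}: excess kurtosis = {excess_kurtosis:5.2f}, "
          f"99.9% quantile of |unit| = {q999:5.2f}")

In such a run, the first layer's pre-activations are exactly Gaussian (excess kurtosis near zero), while deeper layers show increasingly positive excess kurtosis and larger extreme quantiles, consistent with the heavier-than-Gaussian tail behavior that the paper characterizes through the generalized Weibull-tail property.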

Cite this Paper


BibTeX
@InProceedings{pmlr-v157-vladimirova21a,
  title     = {{B}ayesian neural network unit priors and generalized {W}eibull-tail property},
  author    = {Vladimirova, Mariia and Arbel, Julyan and Girard, St\'ephane},
  booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
  pages     = {1397--1412},
  year      = {2021},
  editor    = {Balasubramanian, Vineeth N. and Tsang, Ivor},
  volume    = {157},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v157/vladimirova21a/vladimirova21a.pdf},
  url       = {https://proceedings.mlr.press/v157/vladimirova21a.html},
  abstract  = {The connection between Bayesian neural networks and Gaussian processes gained a lot of attention in the last few years. Hidden units are proven to follow a Gaussian process limit when the layer width tends to infinity. Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they adapt their internal representations flexibly. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of hidden units tails which shows that unit priors become heavier-tailed going deeper, thanks to the introduced notion of generalized Weibull-tail. This finding sheds light on the behavior of hidden units of finite Bayesian neural networks.}
}
EndNote
%0 Conference Paper
%T Bayesian neural network unit priors and generalized Weibull-tail property
%A Mariia Vladimirova
%A Julyan Arbel
%A Stéphane Girard
%B Proceedings of The 13th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Vineeth N. Balasubramanian
%E Ivor Tsang
%F pmlr-v157-vladimirova21a
%I PMLR
%P 1397--1412
%U https://proceedings.mlr.press/v157/vladimirova21a.html
%V 157
%X The connection between Bayesian neural networks and Gaussian processes gained a lot of attention in the last few years. Hidden units are proven to follow a Gaussian process limit when the layer width tends to infinity. Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they adapt their internal representations flexibly. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of hidden units tails which shows that unit priors become heavier-tailed going deeper, thanks to the introduced notion of generalized Weibull-tail. This finding sheds light on the behavior of hidden units of finite Bayesian neural networks.
APA
Vladimirova, M., Arbel, J. & Girard, S. (2021). Bayesian neural network unit priors and generalized Weibull-tail property. Proceedings of The 13th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 157:1397-1412. Available from https://proceedings.mlr.press/v157/vladimirova21a.html.
