Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors

Soumya Ghosh, Jiayu Yao, Finale Doshi-Velez
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1744-1753, 2018.

Abstract

Bayesian Neural Networks (BNNs) have recently received increasing attention for their ability to provide well-calibrated posterior uncertainties. However, model selection—even choosing the number of nodes—remains an open question. Recent work has proposed the use of a horseshoe prior over node pre-activations of a Bayesian neural network, which effectively turns off nodes that do not help explain the data. In this work, we propose several modeling and inference advances that consistently improve the compactness of the model learned while maintaining predictive performance, especially in smaller-sample settings including reinforcement learning.
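The node-level shrinkage described above comes from the horseshoe's scale-mixture form: each node's weights get a Gaussian prior whose scale is the product of a shared global half-Cauchy scale and a per-node local half-Cauchy scale. A minimal NumPy sketch of sampling from such a prior (hyperparameter values and variable names here are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 1000

# Global scale tau and per-node local scales lambda_k, both half-Cauchy.
# The scale parameter 1.0 is an illustrative choice.
tau = np.abs(rng.standard_cauchy())
lam = np.abs(rng.standard_cauchy(n_nodes))

# Node weights: Gaussian with standard deviation tau * lambda_k.
# Heavy-tailed scales let a few nodes stay large while most shrink
# toward zero, effectively switching unneeded nodes off.
w = rng.normal(0.0, 1.0, n_nodes) * tau * lam
```

Because the half-Cauchy puts substantial mass both near zero and in its heavy tail, most sampled `lam` values (and hence most node weights) are small while a few remain large, which is the "turning off" behavior the abstract refers to.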

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-ghosh18a,
  title     = {Structured Variational Learning of {B}ayesian Neural Networks with Horseshoe Priors},
  author    = {Ghosh, Soumya and Yao, Jiayu and Doshi-Velez, Finale},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1744--1753},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/ghosh18a/ghosh18a.pdf},
  url       = {http://proceedings.mlr.press/v80/ghosh18a.html},
  abstract  = {Bayesian Neural Networks (BNNs) have recently received increasing attention for their ability to provide well-calibrated posterior uncertainties. However, model selection—even choosing the number of nodes—remains an open question. Recent work has proposed the use of a horseshoe prior over node pre-activations of a Bayesian neural network, which effectively turns off nodes that do not help explain the data. In this work, we propose several modeling and inference advances that consistently improve the compactness of the model learned while maintaining predictive performance, especially in smaller-sample settings including reinforcement learning.}
}
Endnote
%0 Conference Paper
%T Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors
%A Soumya Ghosh
%A Jiayu Yao
%A Finale Doshi-Velez
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-ghosh18a
%I PMLR
%P 1744--1753
%U http://proceedings.mlr.press/v80/ghosh18a.html
%V 80
%X Bayesian Neural Networks (BNNs) have recently received increasing attention for their ability to provide well-calibrated posterior uncertainties. However, model selection—even choosing the number of nodes—remains an open question. Recent work has proposed the use of a horseshoe prior over node pre-activations of a Bayesian neural network, which effectively turns off nodes that do not help explain the data. In this work, we propose several modeling and inference advances that consistently improve the compactness of the model learned while maintaining predictive performance, especially in smaller-sample settings including reinforcement learning.
APA
Ghosh, S., Yao, J. & Doshi-Velez, F. (2018). Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1744-1753. Available from http://proceedings.mlr.press/v80/ghosh18a.html.