Bayesian Learning of Neural Network Architectures

Georgi Dikov, Justin Bayer
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:730-738, 2019.

Abstract

In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning Concrete distributions over these parameters. Our results show that regular networks with a learned structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require retraining the model, thus keeping the computational overhead to a minimum.
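To make the mechanism tangible, here is a minimal sketch, not the authors' released implementation, of one way to realise the idea in PyTorch: a variational Concrete (relaxed one-hot categorical) distribution over a set of candidate layer widths gates the hidden units, and its logits are trained jointly with the weights through an ELBO-style loss. The candidate width set, the class name StochasticWidthMLP, the uniform architectural prior and the single-sample Monte Carlo KL estimate are all illustrative assumptions; only torch.distributions.RelaxedOneHotCategorical is an actual library primitive.

import torch
import torch.nn as nn
from torch.distributions import RelaxedOneHotCategorical

CANDIDATE_WIDTHS = [16, 32, 64, 128]   # assumed set of admissible layer sizes
MAX_WIDTH = max(CANDIDATE_WIDTHS)

class StochasticWidthMLP(nn.Module):   # hypothetical name, for illustration only
    def __init__(self, in_dim, out_dim, temperature=0.5):
        super().__init__()
        self.hidden = nn.Linear(in_dim, MAX_WIDTH)
        self.out = nn.Linear(MAX_WIDTH, out_dim)
        # Variational logits of the Concrete distribution over candidate widths.
        self.width_logits = nn.Parameter(torch.zeros(len(CANDIDATE_WIDTHS)))
        self.register_buffer("temperature", torch.tensor(temperature))
        # Binary unit masks: row k switches on the first CANDIDATE_WIDTHS[k] units.
        masks = torch.zeros(len(CANDIDATE_WIDTHS), MAX_WIDTH)
        for k, w in enumerate(CANDIDATE_WIDTHS):
            masks[k, :w] = 1.0
        self.register_buffer("unit_masks", masks)

    def forward(self, x):
        q = RelaxedOneHotCategorical(self.temperature, logits=self.width_logits)
        y = q.rsample()                   # differentiable, near-one-hot width choice
        soft_mask = y @ self.unit_masks   # soft gate over the hidden units
        h = torch.relu(self.hidden(x)) * soft_mask
        # Single-sample Monte Carlo estimate of KL(q || uniform prior).
        prior = RelaxedOneHotCategorical(
            self.temperature, logits=torch.zeros_like(self.width_logits))
        kl = q.log_prob(y) - prior.log_prob(y)
        return self.out(h), kl

# One ELBO-style training step on toy regression data.
model = StochasticWidthMLP(in_dim=10, out_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, t = torch.randn(32, 10), torch.randn(32, 1)
opt.zero_grad()
pred, kl = model(x)
loss = nn.functional.mse_loss(pred, t) + kl / 32.0   # likelihood term + scaled KL
loss.backward()
opt.step()

Because the Concrete sample y is reparameterised, gradients flow through both the network weights and the architectural logits in a single backward pass, which is consistent with the abstract's claim that no separate search-and-retrain loop is needed.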

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-dikov19a,
  title     = {Bayesian Learning of Neural Network Architectures},
  author    = {Dikov, Georgi and Bayer, Justin},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {730--738},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/dikov19a/dikov19a.pdf},
  url       = {https://proceedings.mlr.press/v89/dikov19a.html}
}
Endnote
%0 Conference Paper
%T Bayesian Learning of Neural Network Architectures
%A Georgi Dikov
%A Justin Bayer
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-dikov19a
%I PMLR
%P 730--738
%U https://proceedings.mlr.press/v89/dikov19a.html
%V 89
APA
Dikov, G. & Bayer, J.. (2019). Bayesian Learning of Neural Network Architectures. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:730-738 Available from https://proceedings.mlr.press/v89/dikov19a.html.
