Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation

Manuel Haußmann, Fred A. Hamprecht, Melih Kandemir
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:563-573, 2020.

Abstract

We propose a new Bayesian Neural Net formulation that affords variational inference for which the evidence lower bound is analytically tractable subject to a tight approximation. We achieve this tractability by (i) decomposing ReLU nonlinearities into the product of an identity and a Heaviside step function, (ii) introducing a separate path that decomposes the neural net expectation from its variance. We demonstrate formally that introducing separate latent binary variables to the activations allows representing the neural network likelihood as a chain of linear operations. Performing variational inference on this construction enables a sampling-free computation of the evidence lower bound which is a more effective approximation than the widely applied Monte Carlo sampling and CLT related techniques. We evaluate the model on a range of regression and classification tasks against BNN inference alternatives, showing competitive or improved performance over the current state-of-the-art.
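The central identity in the abstract is that a ReLU nonlinearity factors into an identity times a Heaviside step function, ReLU(x) = x · H(x). A minimal numerical sanity check of that identity (illustrative only; the function names below are ours, not the paper's):

```python
import numpy as np

def relu(x):
    # Standard rectified linear unit: max(x, 0)
    return np.maximum(x, 0.0)

def heaviside(x):
    # Heaviside step function: 1 where x > 0, else 0
    return (x > 0).astype(x.dtype)

# ReLU(x) == x * H(x) for any input, including negatives and zero
x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(relu(x), x * heaviside(x))
```

Treating H(x) as a separate (latent binary) factor is what lets the paper rewrite the network likelihood as a chain of linear operations, since x · H(x) is linear in x once H(x) is conditioned on.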

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-haussmann20a,
  title     = {Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation},
  author    = {Hau{\ss}mann, Manuel and Hamprecht, Fred A. and Kandemir, Melih},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {563--573},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/haussmann20a/haussmann20a.pdf},
  url       = {https://proceedings.mlr.press/v115/haussmann20a.html},
  abstract  = {We propose a new Bayesian Neural Net formulation that affords variational inference for which the evidence lower bound is analytically tractable subject to a tight approximation. We achieve this tractability by (i) decomposing ReLU nonlinearities into the product of an identity and a Heaviside step function, (ii) introducing a separate path that decomposes the neural net expectation from its variance. We demonstrate formally that introducing separate latent binary variables to the activations allows representing the neural network likelihood as a chain of linear operations. Performing variational inference on this construction enables a sampling-free computation of the evidence lower bound which is a more effective approximation than the widely applied Monte Carlo sampling and CLT related techniques. We evaluate the model on a range of regression and classification tasks against BNN inference alternatives, showing competitive or improved performance over the current state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation
%A Manuel Haußmann
%A Fred A. Hamprecht
%A Melih Kandemir
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-haussmann20a
%I PMLR
%P 563--573
%U https://proceedings.mlr.press/v115/haussmann20a.html
%V 115
%X We propose a new Bayesian Neural Net formulation that affords variational inference for which the evidence lower bound is analytically tractable subject to a tight approximation. We achieve this tractability by (i) decomposing ReLU nonlinearities into the product of an identity and a Heaviside step function, (ii) introducing a separate path that decomposes the neural net expectation from its variance. We demonstrate formally that introducing separate latent binary variables to the activations allows representing the neural network likelihood as a chain of linear operations. Performing variational inference on this construction enables a sampling-free computation of the evidence lower bound which is a more effective approximation than the widely applied Monte Carlo sampling and CLT related techniques. We evaluate the model on a range of regression and classification tasks against BNN inference alternatives, showing competitive or improved performance over the current state-of-the-art.
APA
Haußmann, M., Hamprecht, F.A. & Kandemir, M. (2020). Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:563-573. Available from https://proceedings.mlr.press/v115/haussmann20a.html.