Well-Defined Function-Space Variational Inference in Bayesian Neural Networks via Regularized KL-Divergence

Tristan Cinquin, Robert Bamler
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:752-776, 2025.

Abstract

Bayesian neural networks (BNNs) promise to combine the predictive performance of neural networks with principled uncertainty modeling crucial for safety-critical systems and decision making. However, posterior uncertainties depend on the choice of prior, and finding informative priors in weight space has proven difficult. This has motivated variational inference (VI) methods that pose priors directly on the function represented by the BNN rather than on its weights. In this paper, we address a fundamental issue with such function-space VI approaches pointed out by Burt et al. (2020), who showed that the objective function (ELBO) is negative infinity for most priors of interest. Our solution builds on generalized VI with the regularized KL divergence and is, to the best of our knowledge, the first well-defined variational objective for inference in BNNs with Gaussian process (GP) priors. Experiments show that our method successfully incorporates the properties specified by the GP prior, and that it provides competitive uncertainty estimates for regression, classification, and out-of-distribution detection compared to BNN baselines with both function- and weight-space priors.
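As a rough schematic (not taken verbatim from the paper's exposition): generalized VI keeps the expected negative log-likelihood but swaps the standard KL term of the negative ELBO for a divergence that remains finite. With $p$ the GP prior over functions, $q$ the variational measure over functions induced by the BNN, $\mathcal{D}$ the data, and $\gamma > 0$ a regularization parameter, the objective takes the form

\[
\mathcal{L}(q) \;=\; \mathbb{E}_{f \sim q}\bigl[-\log p(\mathcal{D} \mid f)\bigr] \;+\; D_{\mathrm{KL}}^{\gamma}\bigl(q \,\|\, p\bigr),
\]

where $D_{\mathrm{KL}}^{\gamma}$ denotes a regularized KL divergence that stays finite even when $q$ and $p$ are mutually singular, the failure mode behind the negative-infinity ELBO identified by Burt et al. (2020).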

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-cinquin25a,
  title     = {Well-Defined Function-Space Variational Inference in Bayesian Neural Networks via Regularized KL-Divergence},
  author    = {Cinquin, Tristan and Bamler, Robert},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {752--776},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/cinquin25a/cinquin25a.pdf},
  url       = {https://proceedings.mlr.press/v286/cinquin25a.html},
  abstract  = {Bayesian neural networks (BNN) promise to combine the predictive performance of neural networks with principled uncertainty modeling crucial for safety-critical systems and decision making. However, posterior uncertainties depend on the choice of prior, and finding informative priors in weight-space has proven difficult. This has motivated variational inference (VI) methods that pose priors directly on the function represented by the BNN rather than on weights. In this paper, we address a fundamental issue with such function-space VI approaches pointed out by Burt et al. (2020), who showed that the objective function (ELBO) is negative infinite for most priors of interest. Our solution builds on generalized VI with the regularized KL divergence and is, to the best of our knowledge, the first well-defined variational objective for inference in BNNs with Gaussian process (GP) priors. Experiments show that our method successfully incorporates the properties specified by the GP prior, and that it provides competitive uncertainty estimates for regression, classification and out-of-distribution detection compared to BNN baselines with both function and weight-space priors.}
}
Endnote
%0 Conference Paper
%T Well-Defined Function-Space Variational Inference in Bayesian Neural Networks via Regularized KL-Divergence
%A Tristan Cinquin
%A Robert Bamler
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-cinquin25a
%I PMLR
%P 752--776
%U https://proceedings.mlr.press/v286/cinquin25a.html
%V 286
%X Bayesian neural networks (BNN) promise to combine the predictive performance of neural networks with principled uncertainty modeling crucial for safety-critical systems and decision making. However, posterior uncertainties depend on the choice of prior, and finding informative priors in weight-space has proven difficult. This has motivated variational inference (VI) methods that pose priors directly on the function represented by the BNN rather than on weights. In this paper, we address a fundamental issue with such function-space VI approaches pointed out by Burt et al. (2020), who showed that the objective function (ELBO) is negative infinite for most priors of interest. Our solution builds on generalized VI with the regularized KL divergence and is, to the best of our knowledge, the first well-defined variational objective for inference in BNNs with Gaussian process (GP) priors. Experiments show that our method successfully incorporates the properties specified by the GP prior, and that it provides competitive uncertainty estimates for regression, classification and out-of-distribution detection compared to BNN baselines with both function and weight-space priors.
APA
Cinquin, T. & Bamler, R. (2025). Well-Defined Function-Space Variational Inference in Bayesian Neural Networks via Regularized KL-Divergence. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:752-776. Available from https://proceedings.mlr.press/v286/cinquin25a.html.
