Do Bayesian Neural Networks Actually Behave Like Bayesian Models?

Gábor Pituk, Vik Shirvaikar, Tom Rainforth
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:49420-49458, 2025.

Abstract

We empirically investigate how well popular approximate inference algorithms for Bayesian Neural Networks (BNNs) respect the theoretical properties of Bayesian belief updating. We find strong evidence on synthetic regression and real-world image classification tasks that common BNN algorithms such as variational inference, Laplace approximation, SWAG, and SGLD fail to update in a consistent manner, forget about old data under sequential updates, and violate the predictive coherence properties that would be expected of Bayesian methods. These observed behaviors imply that care should be taken when treating BNNs as true Bayesian models, particularly when using them beyond static prediction settings, such as for active, continual, or transfer learning.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-pituk25a,
  title     = {Do {B}ayesian Neural Networks Actually Behave Like {B}ayesian Models?},
  author    = {Pituk, G\'{a}bor and Shirvaikar, Vik and Rainforth, Tom},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {49420--49458},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/pituk25a/pituk25a.pdf},
  url       = {https://proceedings.mlr.press/v267/pituk25a.html},
  abstract  = {We empirically investigate how well popular approximate inference algorithms for Bayesian Neural Networks (BNNs) respect the theoretical properties of Bayesian belief updating. We find strong evidence on synthetic regression and real-world image classification tasks that common BNN algorithms such as variational inference, Laplace approximation, SWAG, and SGLD fail to update in a consistent manner, forget about old data under sequential updates, and violate the predictive coherence properties that would be expected of Bayesian methods. These observed behaviors imply that care should be taken when treating BNNs as true Bayesian models, particularly when using them beyond static prediction settings, such as for active, continual, or transfer learning.}
}
Endnote
%0 Conference Paper
%T Do Bayesian Neural Networks Actually Behave Like Bayesian Models?
%A Gábor Pituk
%A Vik Shirvaikar
%A Tom Rainforth
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-pituk25a
%I PMLR
%P 49420--49458
%U https://proceedings.mlr.press/v267/pituk25a.html
%V 267
%X We empirically investigate how well popular approximate inference algorithms for Bayesian Neural Networks (BNNs) respect the theoretical properties of Bayesian belief updating. We find strong evidence on synthetic regression and real-world image classification tasks that common BNN algorithms such as variational inference, Laplace approximation, SWAG, and SGLD fail to update in a consistent manner, forget about old data under sequential updates, and violate the predictive coherence properties that would be expected of Bayesian methods. These observed behaviors imply that care should be taken when treating BNNs as true Bayesian models, particularly when using them beyond static prediction settings, such as for active, continual, or transfer learning.
APA
Pituk, G., Shirvaikar, V. & Rainforth, T. (2025). Do Bayesian Neural Networks Actually Behave Like Bayesian Models? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:49420-49458. Available from https://proceedings.mlr.press/v267/pituk25a.html.