Implicitly Bayesian Prediction Rules in Deep Learning

Bruno Mlodozeniec, David Krueger, Richard Turner
Proceedings of the 6th Symposium on Advances in Approximate Bayesian Inference, PMLR 253:79-110, 2024.

Abstract

The Bayesian approach leads to coherent updates of predictions under new data, which makes adhering to Bayesian principles appealing in decision-making contexts. Traditionally, integrating Bayesian principles into models like deep neural networks involves placing priors on parameters and approximating posteriors. This is done even though priors on parameters typically reflect prior beliefs only insofar as they dictate function-space behaviour. In this paper, we rethink this approach and consider what properties characterise a prediction rule as being Bayesian. Algorithms meeting such criteria can be deemed implicitly Bayesian: they make the same predictions as some Bayesian model, without explicitly manifesting priors and posteriors. We argue this might be a more fruitful approach to integrating Bayesian principles into deep learning. We propose a way to measure how close a general prediction rule is to being implicitly Bayesian, and empirically evaluate multiple prediction strategies using this measure. We also show theoretically that agents relying on prediction rules that are not implicitly Bayesian can be easily exploited in adversarial betting settings.
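The idea at the heart of the abstract admits a classical operational reading. By de Finetti's theorem, a sequential prediction rule p(y_{n+1} | y_1, ..., y_n) coincides with the posterior predictive of some exchangeable Bayesian model exactly when the joint distribution it implies via the chain rule, p(y_1, ..., y_n) = prod_i p(y_i | y_1, ..., y_{i-1}), is invariant to permuting the data. The sketch below is our illustration of that invariance test, not the paper's actual metric; the names `joint_log_lik`, `exchangeability_gap`, `beta_bernoulli`, and `recency_weighted` are hypothetical. It contrasts an explicitly Bayesian Beta-Bernoulli predictive with a recency-weighted rule whose implied joint is order-dependent.

```python
# A minimal sketch (assumed names, not the paper's method): probe whether a
# sequential prediction rule's implied joint distribution is exchangeable,
# which characterises being the predictive of some Bayesian model.
import numpy as np

def joint_log_lik(rule, ys):
    # Chain rule: log p(y_1..y_n) = sum_i log p(y_i | y_1..y_{i-1}).
    return sum(rule(ys[:i], ys[i]) for i in range(len(ys)))

def exchangeability_gap(rule, ys, n_perms=20, seed=0):
    # Spread of the joint log-likelihood over random orderings of the data.
    # Zero (up to float error) for an (implicitly) Bayesian rule.
    rng = np.random.default_rng(seed)
    vals = [joint_log_lik(rule, list(rng.permutation(ys)))
            for _ in range(n_perms)]
    return max(vals) - min(vals)

def beta_bernoulli(context, y, a=1.0, b=1.0):
    # Posterior predictive of a Beta(a, b)-Bernoulli model: explicitly Bayesian.
    p = (a + sum(context)) / (a + b + len(context))
    return np.log(p if y == 1 else 1.0 - p)

def recency_weighted(context, y, decay=0.5):
    # Down-weights old observations; the implied joint depends on the order
    # of the data, so no Bayesian model makes the same predictions.
    w = np.array([decay ** (len(context) - 1 - i) for i in range(len(context))])
    p = (1.0 + w @ np.array(context)) / (2.0 + w.sum()) if len(context) else 0.5
    return np.log(p if y == 1 else 1.0 - p)

data = [1, 0, 1, 1, 0, 1]
print(exchangeability_gap(beta_bernoulli, data))    # ~0: implicitly Bayesian
print(exchangeability_gap(recency_weighted, data))  # > 0: not Bayesian
```

A persistently positive gap certifies that no Bayesian model makes the same predictions as the rule under test; the paper's contribution is a principled way to quantify this closeness for general prediction rules, including deep networks.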

Cite this Paper


BibTeX
@InProceedings{pmlr-v253-mlodozeniec24a,
  title     = {Implicitly Bayesian Prediction Rules in Deep Learning},
  author    = {Mlodozeniec, Bruno and Krueger, David and Turner, Richard},
  booktitle = {Proceedings of the 6th Symposium on Advances in Approximate Bayesian Inference},
  pages     = {79--110},
  year      = {2024},
  editor    = {Antorán, Javier and Naesseth, Christian A.},
  volume    = {253},
  series    = {Proceedings of Machine Learning Research},
  month     = {21 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v253/main/assets/mlodozeniec24a/mlodozeniec24a.pdf},
  url       = {https://proceedings.mlr.press/v253/mlodozeniec24a.html}
}
Endnote
%0 Conference Paper
%T Implicitly Bayesian Prediction Rules in Deep Learning
%A Bruno Mlodozeniec
%A David Krueger
%A Richard Turner
%B Proceedings of the 6th Symposium on Advances in Approximate Bayesian Inference
%C Proceedings of Machine Learning Research
%D 2024
%E Javier Antorán
%E Christian A. Naesseth
%F pmlr-v253-mlodozeniec24a
%I PMLR
%P 79--110
%U https://proceedings.mlr.press/v253/mlodozeniec24a.html
%V 253
APA
Mlodozeniec, B., Krueger, D. & Turner, R. (2024). Implicitly Bayesian Prediction Rules in Deep Learning. Proceedings of the 6th Symposium on Advances in Approximate Bayesian Inference, in Proceedings of Machine Learning Research 253:79-110. Available from https://proceedings.mlr.press/v253/mlodozeniec24a.html.