Meta-Uncertainty in Bayesian Model Comparison

Marvin Schmitt, Stefan T. Radev, Paul-Christian Bürkner
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:11-29, 2023.

Abstract

Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models. In standard BMC, we construct a discrete probability distribution over the set of possible models, conditional on the observed data of interest. These posterior model probabilities (PMPs) are measures of uncertainty, but—when derived from a finite number of observations—are also uncertain themselves. In this paper, we conceptualize distinct levels of uncertainty which arise in BMC. We explore a fully probabilistic framework for quantifying meta-uncertainty, resulting in an applied method to enhance any BMC workflow. Drawing on both Bayesian and frequentist techniques, we represent the uncertainty over the uncertain PMPs via meta-models which combine simulated and observed data into a predictive distribution for PMPs on new data. We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.
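
Editor's note: to make the first-level quantity in the abstract concrete, recall that given candidate models M_1, ..., M_J and observed data y, the posterior model probability of M_j follows from Bayes' rule (this is the standard BMC formulation in generic notation, not necessarily the paper's own):

\[ p(M_j \mid y) = \frac{p(y \mid M_j)\, p(M_j)}{\sum_{k=1}^{J} p(y \mid M_k)\, p(M_k)}, \]

where p(y | M_j) is the marginal likelihood of model M_j. Because y is a finite sample, the vector of PMPs is itself a random quantity across datasets; this second-level randomness is the meta-uncertainty the paper studies.

The sketch below illustrates that second level with two toy Gaussian models whose marginal likelihoods are available in closed form. The models, priors, and all names here are illustrative assumptions for this note; the sketch only shows that PMPs vary across finite samples, and is not the paper's meta-model construction.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20        # observations per simulated dataset
n_sim = 1000  # number of simulated datasets

def log_ml_m1(y):
    # M1: y_i ~ N(0, 1) has no free parameters,
    # so the marginal likelihood equals the likelihood.
    return stats.norm.logpdf(y, loc=0.0, scale=1.0).sum()

def log_ml_m2(y):
    # M2: y_i ~ N(mu, 1) with prior mu ~ N(0, 1).
    # Integrating mu out gives y ~ N(0, I + 1 1^T).
    k = len(y)
    cov = np.eye(k) + np.ones((k, k))
    return stats.multivariate_normal.logpdf(y, mean=np.zeros(k), cov=cov)

def pmp_m1(y):
    # Posterior probability of M1 under equal prior model probabilities.
    return 1.0 / (1.0 + np.exp(log_ml_m2(y) - log_ml_m1(y)))

# Simulate datasets from M1 and inspect the distribution of PMPs.
pmps = np.array([pmp_m1(rng.normal(0.0, 1.0, size=n)) for _ in range(n_sim)])
print(f"PMP(M1) across datasets: mean={pmps.mean():.3f}, "
      f"5%={np.quantile(pmps, 0.05):.3f}, 95%={np.quantile(pmps, 0.95):.3f}")

Even though the data-generating model (M1) is true in this simulation, the printed 5%-95% range shows substantial spread in PMP(M1) across datasets of size n. This dataset-to-dataset variability is exactly the kind of uncertainty a meta-model over PMPs is meant to capture.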

Cite this Paper
BibTeX
@InProceedings{pmlr-v206-schmitt23a,
  title     = {Meta-Uncertainty in Bayesian Model Comparison},
  author    = {Schmitt, Marvin and Radev, Stefan T. and B\"urkner, Paul-Christian},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {11--29},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/schmitt23a/schmitt23a.pdf},
  url       = {https://proceedings.mlr.press/v206/schmitt23a.html},
  abstract  = {Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models. In standard BMC, we construct a discrete probability distribution over the set of possible models, conditional on the observed data of interest. These posterior model probabilities (PMPs) are measures of uncertainty, but—when derived from a finite number of observations—are also uncertain themselves. In this paper, we conceptualize distinct levels of uncertainty which arise in BMC. We explore a fully probabilistic framework for quantifying meta-uncertainty, resulting in an applied method to enhance any BMC workflow. Drawing on both Bayesian and frequentist techniques, we represent the uncertainty over the uncertain PMPs via meta-models which combine simulated and observed data into a predictive distribution for PMPs on new data. We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.}
}
Endnote
%0 Conference Paper
%T Meta-Uncertainty in Bayesian Model Comparison
%A Marvin Schmitt
%A Stefan T. Radev
%A Paul-Christian Bürkner
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-schmitt23a
%I PMLR
%P 11--29
%U https://proceedings.mlr.press/v206/schmitt23a.html
%V 206
%X Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models. In standard BMC, we construct a discrete probability distribution over the set of possible models, conditional on the observed data of interest. These posterior model probabilities (PMPs) are measures of uncertainty, but—when derived from a finite number of observations—are also uncertain themselves. In this paper, we conceptualize distinct levels of uncertainty which arise in BMC. We explore a fully probabilistic framework for quantifying meta-uncertainty, resulting in an applied method to enhance any BMC workflow. Drawing on both Bayesian and frequentist techniques, we represent the uncertainty over the uncertain PMPs via meta-models which combine simulated and observed data into a predictive distribution for PMPs on new data. We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.
APA
Schmitt, M., Radev, S.T. & Bürkner, P.-C. (2023). Meta-Uncertainty in Bayesian Model Comparison. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:11-29. Available from https://proceedings.mlr.press/v206/schmitt23a.html.