Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference

Marvin Schmitt, Desi R. Ivanova, Daniel Habermann, Ullrich Koethe, Paul-Christian Bürkner, Stefan T. Radev
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43723-43741, 2024.

Abstract

We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data. In a nutshell, we invert Bayes’ theorem and estimate the marginal likelihood based on approximate representations of the joint model. Upon perfect approximation, the marginal likelihood is constant across all parameter values by definition. However, errors in approximate inference lead to undesirable variance in the marginal likelihood estimates across different parameter values. We penalize violations of this symmetry with a self-consistency loss which significantly improves the quality of approximate inference in low data regimes and can be used to augment the training of popular neural density estimators. We apply our method to a number of synthetic problems and realistic scientific models, discovering notable advantages in the context of both neural posterior and likelihood approximation.
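To make the symmetry concrete: inverting Bayes' theorem gives log p(y) = log p(θ) + log p(y | θ) − log p(θ | y), which is the same value for every θ when the posterior is exact, so the spread of this quantity across parameter draws under an approximate posterior q_φ(θ | y) measures approximation error. The sketch below is a minimal PyTorch-style illustration of such a variance penalty; the function and variable names (self_consistency_loss, log_prior, log_lik, log_q_posterior, posterior_net, lam) are assumptions for exposition, not the authors' reference implementation.

```python
import torch

def self_consistency_loss(log_prior: torch.Tensor,
                          log_lik: torch.Tensor,
                          log_q_posterior: torch.Tensor) -> torch.Tensor:
    """Variance penalty on the implied log marginal likelihood.

    All arguments are shape (K,) tensors evaluated at K parameter draws
    theta_1, ..., theta_K for a single observed dataset y:
      log_prior       -- log p(theta_k)
      log_lik         -- log p(y | theta_k)
      log_q_posterior -- log q_phi(theta_k | y), the neural posterior density
    """
    # Inverted Bayes' theorem: log p(y) = log p(theta) + log p(y|theta) - log p(theta|y).
    # With a perfect posterior approximation this value is identical for every theta_k.
    log_marginal = log_prior + log_lik - log_q_posterior
    # Violations of that constancy indicate approximation error; penalize their variance.
    return log_marginal.var()


# Hypothetical usage inside a training step (names are illustrative):
#   nll  = -posterior_net.log_prob(theta_batch, context=y_batch).mean()  # standard NPE loss
#   sc   = self_consistency_loss(log_prior, log_lik, log_q_posterior)
#   loss = nll + lam * sc  # lam weights the self-consistency term
```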

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-schmitt24a,
  title     = {Leveraging Self-Consistency for Data-Efficient Amortized {B}ayesian Inference},
  author    = {Schmitt, Marvin and Ivanova, Desi R. and Habermann, Daniel and Koethe, Ullrich and B\"{u}rkner, Paul-Christian and Radev, Stefan T.},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43723--43741},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/schmitt24a/schmitt24a.pdf},
  url       = {https://proceedings.mlr.press/v235/schmitt24a.html},
  abstract  = {We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data. In a nutshell, we invert Bayes’ theorem and estimate the marginal likelihood based on approximate representations of the joint model. Upon perfect approximation, the marginal likelihood is constant across all parameter values by definition. However, errors in approximate inference lead to undesirable variance in the marginal likelihood estimates across different parameter values. We penalize violations of this symmetry with a self-consistency loss which significantly improves the quality of approximate inference in low data regimes and can be used to augment the training of popular neural density estimators. We apply our method to a number of synthetic problems and realistic scientific models, discovering notable advantages in the context of both neural posterior and likelihood approximation.}
}
Endnote
%0 Conference Paper
%T Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference
%A Marvin Schmitt
%A Desi R. Ivanova
%A Daniel Habermann
%A Ullrich Koethe
%A Paul-Christian Bürkner
%A Stefan T. Radev
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-schmitt24a
%I PMLR
%P 43723--43741
%U https://proceedings.mlr.press/v235/schmitt24a.html
%V 235
%X We propose a method to improve the efficiency and accuracy of amortized Bayesian inference by leveraging universal symmetries in the joint probabilistic model of parameters and data. In a nutshell, we invert Bayes’ theorem and estimate the marginal likelihood based on approximate representations of the joint model. Upon perfect approximation, the marginal likelihood is constant across all parameter values by definition. However, errors in approximate inference lead to undesirable variance in the marginal likelihood estimates across different parameter values. We penalize violations of this symmetry with a self-consistency loss which significantly improves the quality of approximate inference in low data regimes and can be used to augment the training of popular neural density estimators. We apply our method to a number of synthetic problems and realistic scientific models, discovering notable advantages in the context of both neural posterior and likelihood approximation.
APA
Schmitt, M., Ivanova, D.R., Habermann, D., Koethe, U., Bürkner, P.-C. & Radev, S.T. (2024). Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43723-43741. Available from https://proceedings.mlr.press/v235/schmitt24a.html.
