Variational Learning of Fractional Posteriors

Kian Ming A. Chai, Edwin V. Bonilla
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:7058-7088, 2025.

Abstract

We introduce a novel one-parameter variational objective that lower bounds the data evidence and enables the estimation of approximate fractional posteriors. We extend this framework to a hierarchical construction and to Bayes posteriors, offering a versatile tool for probabilistic modelling. We present two cases in which gradients can be obtained analytically, together with a simulation study on mixture models showing that our fractional posteriors achieve better calibration than posteriors from the conventional variational bound. When applied to variational autoencoders (VAEs), our approach attains higher evidence bounds and enables learning of high-performing approximate Bayes posteriors jointly with fractional posteriors. We show that VAEs trained with fractional posteriors produce decoders that are better aligned for generation from the prior.
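
For readers skimming only this page, the object in the title has a standard definition: a fractional (or power) posterior tempers the likelihood with an exponent β. The display below is a minimal background sketch of that standard notion, not the paper's objective; the novel one-parameter lower bound on the evidence is developed in the PDF linked below.

\[
\pi_\beta(\theta \mid x) \;=\; \frac{p(x \mid \theta)^{\beta}\, p(\theta)}{\int p(x \mid \theta')^{\beta}\, p(\theta')\, \mathrm{d}\theta'}, \qquad \beta \in (0, 1],
\]

with β = 1 recovering the usual Bayes posterior, which is the conventional target of the standard evidence lower bound.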

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chai25a,
  title     = {Variational Learning of Fractional Posteriors},
  author    = {Chai, Kian Ming A. and Bonilla, Edwin V.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {7058--7088},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chai25a/chai25a.pdf},
  url       = {https://proceedings.mlr.press/v267/chai25a.html},
  abstract  = {We introduce a novel one-parameter variational objective that lower bounds the data evidence and enables the estimation of approximate fractional posteriors. We extend this framework to hierarchical construction and Bayes posteriors, offering a versatile tool for probabilistic modelling. We demonstrate two cases where gradients can be obtained analytically and a simulation study on mixture models showing that our fractional posteriors can be used to achieve better calibration compared to posteriors from the conventional variational bound. When applied to variational autoencoders (VAEs), our approach attains higher evidence bounds and enables learning of high-performing approximate Bayes posteriors jointly with fractional posteriors. We show that VAEs trained with fractional posteriors produce decoders that are better aligned for generation from the prior.}
}
Endnote
%0 Conference Paper
%T Variational Learning of Fractional Posteriors
%A Kian Ming A. Chai
%A Edwin V. Bonilla
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chai25a
%I PMLR
%P 7058--7088
%U https://proceedings.mlr.press/v267/chai25a.html
%V 267
%X We introduce a novel one-parameter variational objective that lower bounds the data evidence and enables the estimation of approximate fractional posteriors. We extend this framework to hierarchical construction and Bayes posteriors, offering a versatile tool for probabilistic modelling. We demonstrate two cases where gradients can be obtained analytically and a simulation study on mixture models showing that our fractional posteriors can be used to achieve better calibration compared to posteriors from the conventional variational bound. When applied to variational autoencoders (VAEs), our approach attains higher evidence bounds and enables learning of high-performing approximate Bayes posteriors jointly with fractional posteriors. We show that VAEs trained with fractional posteriors produce decoders that are better aligned for generation from the prior.
APA
Chai, K. M. A., & Bonilla, E. V. (2025). Variational Learning of Fractional Posteriors. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:7058-7088. Available from https://proceedings.mlr.press/v267/chai25a.html.