Improving Gradient-Guided Nested Sampling for Posterior Inference

Pablo Lemos, Nikolay Malkin, Will Handley, Yoshua Bengio, Yashar Hezaveh, Laurence Perreault-Levasseur
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:27230-27253, 2024.

Abstract

We present a performant, general-purpose gradient-guided nested sampling (GGNS) algorithm, combining the state of the art in differentiable programming, Hamiltonian slice sampling, clustering, mode separation, dynamic nested sampling, and parallelization. This unique combination allows GGNS to scale well with dimensionality and perform competitively on a variety of synthetic and real-world problems. We also show the potential of combining nested sampling with generative flow networks to obtain large amounts of high-quality samples from the posterior distribution. This combination leads to faster mode discovery and more accurate estimates of the partition function.
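For context, the partition function mentioned above is the Bayesian evidence Z = ∫ L(θ) π(θ) dθ, the normalizing constant of the posterior, which nested sampling estimates by evolving a population of "live points" under a rising likelihood constraint. Below is a minimal, self-contained Python/NumPy sketch of the classic nested sampling loop that GGNS builds on. It is illustrative only and is not the authors' implementation: in particular, GGNS replaces the brute-force rejection step shown here with gradient-guided Hamiltonian slice sampling, plus clustering, mode separation, and dynamic live-point allocation. All names and parameter values below are hypothetical choices for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Toy unit-Gaussian likelihood in d dimensions (illustration only).
    return -0.5 * np.sum(theta**2, axis=-1)

d = 2                             # dimensionality
n_live = 200                      # number of live points
prior_lo, prior_hi = -5.0, 5.0    # uniform prior box, volume (hi - lo)^d

live = rng.uniform(prior_lo, prior_hi, size=(n_live, d))
live_logL = log_likelihood(live)

log_Z = -np.inf                   # running evidence (partition function)
log_X = 0.0                       # log of remaining prior volume fraction

for i in range(2000):
    worst = np.argmin(live_logL)
    logL_star = live_logL[worst]

    # Each iteration the enclosed prior volume shrinks by roughly a
    # factor n/(n+1); accumulate the evidence over the discarded shell.
    log_X_new = log_X - 1.0 / n_live
    log_w = np.log(np.exp(log_X) - np.exp(log_X_new))  # shell volume
    log_Z = np.logaddexp(log_Z, logL_star + log_w)
    log_X = log_X_new

    # Replace the worst live point with a new prior draw subject to the
    # hard constraint L > L*. GGNS performs this step with Hamiltonian
    # slice sampling guided by the likelihood gradient; brute-force
    # rejection is used here only to keep the sketch short.
    while True:
        candidate = rng.uniform(prior_lo, prior_hi, size=d)
        cand_logL = log_likelihood(candidate)
        if cand_logL > logL_star:
            break
    live[worst] = candidate
    live_logL[worst] = cand_logL

# The final correction from the surviving live points is omitted for
# brevity; it is negligible after enough iterations. For this toy
# problem the analytic answer is log(2*pi / 100) ≈ -2.77.
print(f"log Z estimate: {log_Z:.3f}")

The discarded points, weighted by L_i * w_i, double as posterior samples, which is why the abstract treats evidence estimation and posterior sampling as products of the same run.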

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lemos24a,
  title     = {Improving Gradient-Guided Nested Sampling for Posterior Inference},
  author    = {Lemos, Pablo and Malkin, Nikolay and Handley, Will and Bengio, Yoshua and Hezaveh, Yashar and Perreault-Levasseur, Laurence},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {27230--27253},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lemos24a/lemos24a.pdf},
  url       = {https://proceedings.mlr.press/v235/lemos24a.html},
  abstract  = {We present a performant, general-purpose gradient-guided nested sampling (GGNS) algorithm, combining the state of the art in differentiable programming, Hamiltonian slice sampling, clustering, mode separation, dynamic nested sampling, and parallelization. This unique combination allows GGNS to scale well with dimensionality and perform competitively on a variety of synthetic and real-world problems. We also show the potential of combining nested sampling with generative flow networks to obtain large amounts of high-quality samples from the posterior distribution. This combination leads to faster mode discovery and more accurate estimates of the partition function.}
}
Endnote
%0 Conference Paper
%T Improving Gradient-Guided Nested Sampling for Posterior Inference
%A Pablo Lemos
%A Nikolay Malkin
%A Will Handley
%A Yoshua Bengio
%A Yashar Hezaveh
%A Laurence Perreault-Levasseur
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lemos24a
%I PMLR
%P 27230--27253
%U https://proceedings.mlr.press/v235/lemos24a.html
%V 235
%X We present a performant, general-purpose gradient-guided nested sampling (GGNS) algorithm, combining the state of the art in differentiable programming, Hamiltonian slice sampling, clustering, mode separation, dynamic nested sampling, and parallelization. This unique combination allows GGNS to scale well with dimensionality and perform competitively on a variety of synthetic and real-world problems. We also show the potential of combining nested sampling with generative flow networks to obtain large amounts of high-quality samples from the posterior distribution. This combination leads to faster mode discovery and more accurate estimates of the partition function.
APA
Lemos, P., Malkin, N., Handley, W., Bengio, Y., Hezaveh, Y. & Perreault-Levasseur, L. (2024). Improving Gradient-Guided Nested Sampling for Posterior Inference. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:27230-27253. Available from https://proceedings.mlr.press/v235/lemos24a.html.