Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling

Denis Blessing, Xiaogang Jia, Johannes Esslinger, Francisco Vargas, Gerhard Neumann
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:4205-4229, 2024.

Abstract

Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in sampling from intractable probability distributions. However, current studies lack a unified evaluation framework, relying on disparate performance measures and limited method comparisons across diverse tasks, complicating the assessment of progress and hindering the decision-making of practitioners. In response to these challenges, our work introduces a benchmark that evaluates sampling methods using a standardized task suite and a broad range of performance criteria. Moreover, we study existing metrics for quantifying mode collapse and introduce novel metrics for this purpose. Our findings provide insights into strengths and weaknesses of existing sampling methods, serving as a valuable reference for future developments.

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-blessing24a,
  title     = {Beyond {ELBO}s: A Large-Scale Evaluation of Variational Methods for Sampling},
  author    = {Blessing, Denis and Jia, Xiaogang and Esslinger, Johannes and Vargas, Francisco and Neumann, Gerhard},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {4205--4229},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/blessing24a/blessing24a.pdf},
  url       = {https://proceedings.mlr.press/v235/blessing24a.html},
  abstract  = {Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in sampling from intractable probability distributions. However, current studies lack a unified evaluation framework, relying on disparate performance measures and limited method comparisons across diverse tasks, complicating the assessment of progress and hindering the decision-making of practitioners. In response to these challenges, our work introduces a benchmark that evaluates sampling methods using a standardized task suite and a broad range of performance criteria. Moreover, we study existing metrics for quantifying mode collapse and introduce novel metrics for this purpose. Our findings provide insights into strengths and weaknesses of existing sampling methods, serving as a valuable reference for future developments.}
}
Endnote
%0 Conference Paper
%T Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling
%A Denis Blessing
%A Xiaogang Jia
%A Johannes Esslinger
%A Francisco Vargas
%A Gerhard Neumann
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-blessing24a
%I PMLR
%P 4205--4229
%U https://proceedings.mlr.press/v235/blessing24a.html
%V 235
%X Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in sampling from intractable probability distributions. However, current studies lack a unified evaluation framework, relying on disparate performance measures and limited method comparisons across diverse tasks, complicating the assessment of progress and hindering the decision-making of practitioners. In response to these challenges, our work introduces a benchmark that evaluates sampling methods using a standardized task suite and a broad range of performance criteria. Moreover, we study existing metrics for quantifying mode collapse and introduce novel metrics for this purpose. Our findings provide insights into strengths and weaknesses of existing sampling methods, serving as a valuable reference for future developments.
APA
Blessing, D., Jia, X., Esslinger, J., Vargas, F., & Neumann, G. (2024). Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:4205-4229. Available from https://proceedings.mlr.press/v235/blessing24a.html.