Rapid Convergence of Informed Importance Tempering

Quan Zhou, Aaron Smith
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:10939-10965, 2022.

Abstract

Informed Markov chain Monte Carlo (MCMC) methods have been proposed as scalable solutions to Bayesian posterior computation on high-dimensional discrete state spaces, but theoretical results about their convergence behavior in general settings are lacking. In this article, we propose a class of MCMC schemes called informed importance tempering (IIT), which combine importance sampling and informed local proposals, and derive generally applicable spectral gap bounds for IIT estimators. Our theory shows that IIT samplers have remarkable scalability when the target posterior distribution concentrates on a small set. Further, both our theory and numerical experiments demonstrate that the informed proposal should be chosen with caution: the performance may be very sensitive to the shape of the target distribution. We find that the “square-root proposal weighting” scheme tends to perform well in most settings.
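The core idea in the abstract can be illustrated with a toy sketch (the function names, ring-shaped target, and all specifics below are illustrative assumptions, not the paper's implementation): at the current state, each neighbor y is weighted by h(π(y)/π(x)) with h the square root ("square-root proposal weighting"), the chain always moves to a weighted neighbor, and the visited state is corrected with an importance weight proportional to 1/Z_h(x), where Z_h(x) averages the proposal weights.

```python
import math
import random

def iit_sample(logpi, neighbors, x0, n_iter, h=math.sqrt, seed=0):
    """Hedged sketch of informed importance tempering (IIT).

    logpi: log of the (unnormalized) target on a discrete space.
    neighbors: function returning the neighbor list of a state.
    h: proposal weighting function; sqrt gives square-root weighting.
    Returns visited states and their importance weights.
    """
    rng = random.Random(seed)
    x = x0
    states, weights = [], []
    for _ in range(n_iter):
        nbrs = neighbors(x)
        # Informed proposal weights h(pi(y)/pi(x)) for each neighbor y.
        w = [h(math.exp(logpi(y) - logpi(x))) for y in nbrs]
        Z = sum(w) / len(nbrs)  # local normalizing constant Z_h(x)
        states.append(x)
        weights.append(1.0 / Z)  # importance weight correcting the bias
        # Always accept: move to a neighbor drawn proportionally to w.
        x = rng.choices(nbrs, weights=w, k=1)[0]
    return states, weights

def iit_estimate(f, states, weights):
    """Self-normalized importance-weighted estimate of E_pi[f(X)]."""
    return sum(wi * f(xi) for xi, wi in zip(states, weights)) / sum(weights)
```

As a usage example, one could take π(x) ∝ exp(−|x − 5|) on the ring {0, …, 9} with neighbors x ± 1 (mod 10) and estimate the posterior mean with `iit_estimate(lambda x: float(x), *iit_sample(lambda x: -abs(x - 5), lambda x: [(x - 1) % 10, (x + 1) % 10], 0, 20000))`. Because the chain never rejects, every iteration contributes a weighted sample, which is the intuition behind the scalability claims above.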

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-zhou22e,
  title     = {Rapid Convergence of Informed Importance Tempering},
  author    = {Zhou, Quan and Smith, Aaron},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {10939--10965},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/zhou22e/zhou22e.pdf},
  url       = {https://proceedings.mlr.press/v151/zhou22e.html},
  abstract  = {Informed Markov chain Monte Carlo (MCMC) methods have been proposed as scalable solutions to Bayesian posterior computation on high-dimensional discrete state spaces, but theoretical results about their convergence behavior in general settings are lacking. In this article, we propose a class of MCMC schemes called informed importance tempering (IIT), which combine importance sampling and informed local proposals, and derive generally applicable spectral gap bounds for IIT estimators. Our theory shows that IIT samplers have remarkable scalability when the target posterior distribution concentrates on a small set. Further, both our theory and numerical experiments demonstrate that the informed proposal should be chosen with caution: the performance may be very sensitive to the shape of the target distribution. We find that the “square-root proposal weighting” scheme tends to perform well in most settings.}
}
Endnote
%0 Conference Paper
%T Rapid Convergence of Informed Importance Tempering
%A Quan Zhou
%A Aaron Smith
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-zhou22e
%I PMLR
%P 10939--10965
%U https://proceedings.mlr.press/v151/zhou22e.html
%V 151
%X Informed Markov chain Monte Carlo (MCMC) methods have been proposed as scalable solutions to Bayesian posterior computation on high-dimensional discrete state spaces, but theoretical results about their convergence behavior in general settings are lacking. In this article, we propose a class of MCMC schemes called informed importance tempering (IIT), which combine importance sampling and informed local proposals, and derive generally applicable spectral gap bounds for IIT estimators. Our theory shows that IIT samplers have remarkable scalability when the target posterior distribution concentrates on a small set. Further, both our theory and numerical experiments demonstrate that the informed proposal should be chosen with caution: the performance may be very sensitive to the shape of the target distribution. We find that the “square-root proposal weighting” scheme tends to perform well in most settings.
APA
Zhou, Q. & Smith, A. (2022). Rapid Convergence of Informed Importance Tempering. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:10939-10965. Available from https://proceedings.mlr.press/v151/zhou22e.html.