Adaptive importance sampling for heavy-tailed distributions via $\alpha$-divergence minimization

Thomas Guilmeau, Nicola Branchini, Emilie Chouzenoux, Victor Elvira
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3871-3879, 2024.

Abstract

Adaptive importance sampling (AIS) algorithms are widely used to approximate expectations with respect to complicated target probability distributions. When the target has heavy tails, existing AIS algorithms can provide inconsistent estimators or exhibit slow convergence, as they often neglect the target’s tail behaviour. To avoid this pitfall, we propose an AIS algorithm that approximates the target by Student-t proposal distributions. We adapt location and scale parameters by matching the escort moments (which are defined even for heavy-tailed distributions) of the target and proposal. These updates minimize the $\alpha$-divergence between the target and the proposal, thereby connecting with variational inference. We then show that the $\alpha$-divergence can be approximated by a generalized notion of effective sample size and leverage this new perspective to adapt the tail parameter with Bayesian optimization. We demonstrate the efficacy of our approach through applications to synthetic targets and a Bayesian Student-t regression task on a real example with clinical trial data.
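To make the setting concrete, here is a minimal importance-sampling sketch, not the paper's adaptive algorithm: plain self-normalized importance sampling for a heavy-tailed Student-t target (3 degrees of freedom) using a fixed heavier-tailed Student-t proposal (1 degree of freedom, i.e. Cauchy), together with the standard effective sample size, which the paper generalizes and connects to the $\alpha$-divergence. All densities and the target probability below are textbook quantities, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Heavy-tailed target: Student-t with nu = 3,
# p(x) = (2 / (pi * sqrt(3))) * (1 + x^2/3)^(-2).
def log_target(x):
    return np.log(2.0 / (np.pi * np.sqrt(3.0))) - 2.0 * np.log1p(x ** 2 / 3.0)

# Proposal: Student-t with nu = 1 (standard Cauchy), whose heavier
# tails keep the importance weights bounded,
# q(x) = 1 / (pi * (1 + x^2)).
def log_proposal(x):
    return -np.log(np.pi) - np.log1p(x ** 2)

x = rng.standard_cauchy(size=N)
log_w = log_target(x) - log_proposal(x)
w = np.exp(log_w - log_w.max())        # stabilized, unnormalized weights
w_norm = w / w.sum()

# Self-normalized estimate of P(X > 2) under the t_3 target
# (exact value is about 0.0697).
est = float(np.sum(w_norm * (x > 2)))

# Standard effective sample size; the paper studies a generalized
# ESS family of this kind as an approximation to the alpha-divergence.
ess = 1.0 / float(np.sum(w_norm ** 2))
```

The paper's algorithm goes further than this sketch: it adapts the proposal's location and scale by escort-moment matching and tunes the tail parameter via the generalized ESS with Bayesian optimization.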

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-guilmeau24a,
  title     = {Adaptive importance sampling for heavy-tailed distributions via ${\alpha}$-divergence minimization},
  author    = {Guilmeau, Thomas and Branchini, Nicola and Chouzenoux, Emilie and Elvira, Victor},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3871--3879},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/guilmeau24a/guilmeau24a.pdf},
  url       = {https://proceedings.mlr.press/v238/guilmeau24a.html},
  abstract  = {Adaptive importance sampling (AIS) algorithms are widely used to approximate expectations with respect to complicated target probability distributions. When the target has heavy tails, existing AIS algorithms can provide inconsistent estimators or exhibit slow convergence, as they often neglect the target's tail behaviour. To avoid this pitfall, we propose an AIS algorithm that approximates the target by Student-t proposal distributions. We adapt location and scale parameters by matching the escort moments (which are defined even for heavy-tailed distributions) of the target and proposal. These updates minimize the $\alpha$-divergence between the target and the proposal, thereby connecting with variational inference. We then show that the $\alpha$-divergence can be approximated by a generalized notion of effective sample size and leverage this new perspective to adapt the tail parameter with Bayesian optimization. We demonstrate the efficacy of our approach through applications to synthetic targets and a Bayesian Student-t regression task on a real example with clinical trial data.}
}
Endnote
%0 Conference Paper
%T Adaptive importance sampling for heavy-tailed distributions via $\alpha$-divergence minimization
%A Thomas Guilmeau
%A Nicola Branchini
%A Emilie Chouzenoux
%A Victor Elvira
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-guilmeau24a
%I PMLR
%P 3871--3879
%U https://proceedings.mlr.press/v238/guilmeau24a.html
%V 238
%X Adaptive importance sampling (AIS) algorithms are widely used to approximate expectations with respect to complicated target probability distributions. When the target has heavy tails, existing AIS algorithms can provide inconsistent estimators or exhibit slow convergence, as they often neglect the target’s tail behaviour. To avoid this pitfall, we propose an AIS algorithm that approximates the target by Student-t proposal distributions. We adapt location and scale parameters by matching the escort moments (which are defined even for heavy-tailed distributions) of the target and proposal. These updates minimize the $\alpha$-divergence between the target and the proposal, thereby connecting with variational inference. We then show that the $\alpha$-divergence can be approximated by a generalized notion of effective sample size and leverage this new perspective to adapt the tail parameter with Bayesian optimization. We demonstrate the efficacy of our approach through applications to synthetic targets and a Bayesian Student-t regression task on a real example with clinical trial data.
APA
Guilmeau, T., Branchini, N., Chouzenoux, E. &amp; Elvira, V. (2024). Adaptive importance sampling for heavy-tailed distributions via $\alpha$-divergence minimization. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3871-3879. Available from https://proceedings.mlr.press/v238/guilmeau24a.html.