Deep Generative Quantile Bayes

Jungeum Kim, Percy S. Zhai, Veronika Rockova
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4141-4149, 2025.

Abstract

We develop a multivariate posterior sampling procedure through deep generative quantile learning. Simulation proceeds implicitly through a push-forward mapping that transforms i.i.d. random vectors into samples from the posterior. We utilize the Monge-Kantorovich depth notion of multivariate quantiles to sample directly from Bayesian credible sets, a unique feature not offered by typical posterior sampling methods. To enhance training of the quantile mapping, we design a neural network that automatically performs summary statistic extraction. This additional network structure yields performance benefits, including support shrinkage (i.e., contraction of our posterior approximation) as the observation sample size increases. We demonstrate the usefulness of our approach on several examples where the absence of a likelihood renders classical MCMC infeasible. Finally, we provide frequentist theoretical justifications for our quantile learning framework: consistency of the estimated vector quantile, of the recovered posterior distribution, and of the corresponding Bayesian credible sets.
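To make the push-forward idea concrete, below is a minimal univariate sketch in PyTorch. It is an illustration only: it trains a conditional quantile network with the standard check (pinball) loss on a hypothetical Gaussian toy simulator, not the paper's multivariate Monge-Kantorovich objective, and it omits the summary-statistic network. Restricting the reference variable u to a central sub-interval is the one-dimensional analogue of sampling inside a Monge-Kantorovich depth contour.

# Minimal sketch: deep quantile learning of a posterior in 1-D.
# Toy simulator (hypothetical, not from the paper):
#   theta ~ N(0, 1),  x | theta ~ N(theta, 0.5^2).
import torch
import torch.nn as nn

def simulate(n):
    theta = torch.randn(n, 1)
    x = theta + 0.5 * torch.randn(n, 1)
    return theta, x

# Q(u, x): maps a quantile level u in (0, 1) and an observation x
# to the approximate u-th posterior quantile of theta given x.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta, x = simulate(256)
    u = torch.rand(256, 1)                    # random quantile levels
    q = net(torch.cat([u, x], dim=1))
    err = theta - q
    # Check (pinball) loss: its minimizer is the conditional quantile.
    loss = torch.maximum(u * err, (u - 1.0) * err).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Posterior sampling: push u ~ U(0, 1) forward through the learned map.
with torch.no_grad():
    x_obs = torch.full((1000, 1), 0.8)        # repeated observed data point
    u = torch.rand(1000, 1)
    post_samples = net(torch.cat([u, x_obs], dim=1))
    # Credible-set sampling: u restricted to [0.05, 0.95] draws from the
    # posterior truncated to its central 90% credible interval.
    u_c = 0.05 + 0.9 * torch.rand(1000, 1)
    cred_samples = net(torch.cat([u_c, x_obs], dim=1))

In the paper's multivariate setting, u is instead a reference vector (e.g., uniform on the unit ball) and the learned map is the conditional vector quantile, so depth-based credible sets are obtained by constraining the norm of u.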

Cite this Paper

BibTeX
@InProceedings{pmlr-v258-kim25d,
  title     = {Deep Generative Quantile Bayes},
  author    = {Kim, Jungeum and Zhai, Percy S. and Rockova, Veronika},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4141--4149},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/kim25d/kim25d.pdf},
  url       = {https://proceedings.mlr.press/v258/kim25d.html},
  abstract  = {We develop a multivariate posterior sampling procedure through deep generative quantile learning. Simulation proceeds implicitly through a push-forward mapping that transforms i.i.d. random vectors into samples from the posterior. We utilize the Monge-Kantorovich depth notion of multivariate quantiles to sample directly from Bayesian credible sets, a unique feature not offered by typical posterior sampling methods. To enhance training of the quantile mapping, we design a neural network that automatically performs summary statistic extraction. This additional network structure yields performance benefits, including support shrinkage (i.e., contraction of our posterior approximation) as the observation sample size increases. We demonstrate the usefulness of our approach on several examples where the absence of a likelihood renders classical MCMC infeasible. Finally, we provide frequentist theoretical justifications for our quantile learning framework: consistency of the estimated vector quantile, of the recovered posterior distribution, and of the corresponding Bayesian credible sets.}
}
Endnote
%0 Conference Paper
%T Deep Generative Quantile Bayes
%A Jungeum Kim
%A Percy S. Zhai
%A Veronika Rockova
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-kim25d
%I PMLR
%P 4141--4149
%U https://proceedings.mlr.press/v258/kim25d.html
%V 258
%X We develop a multivariate posterior sampling procedure through deep generative quantile learning. Simulation proceeds implicitly through a push-forward mapping that transforms i.i.d. random vectors into samples from the posterior. We utilize the Monge-Kantorovich depth notion of multivariate quantiles to sample directly from Bayesian credible sets, a unique feature not offered by typical posterior sampling methods. To enhance training of the quantile mapping, we design a neural network that automatically performs summary statistic extraction. This additional network structure yields performance benefits, including support shrinkage (i.e., contraction of our posterior approximation) as the observation sample size increases. We demonstrate the usefulness of our approach on several examples where the absence of a likelihood renders classical MCMC infeasible. Finally, we provide frequentist theoretical justifications for our quantile learning framework: consistency of the estimated vector quantile, of the recovered posterior distribution, and of the corresponding Bayesian credible sets.
APA
Kim, J., Zhai, P.S. & Rockova, V. (2025). Deep Generative Quantile Bayes. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4141-4149. Available from https://proceedings.mlr.press/v258/kim25d.html.
