A minimax near-optimal algorithm for adaptive rejection sampling

Juliette Achddou, Joseph Lam-Weil, Alexandra Carpentier, Gilles Blanchard
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:94-126, 2019.

Abstract

Rejection Sampling is a fundamental Monte-Carlo method. It is used to sample from distributions admitting a probability density function which can be evaluated exactly at any given point, albeit at a high computational cost. However, without proper tuning, this technique implies a high rejection rate. Several methods have been explored to cope with this problem, based on the principle of adaptively estimating the density by a simpler function, using the information of the previous samples. Most of them either rely on strong assumptions on the form of the density, or do not offer any theoretical performance guarantee. We give the first theoretical lower bound for the problem of adaptive rejection sampling and introduce a new algorithm which guarantees a near-optimal rejection rate in a minimax sense.
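As a concrete illustration of the baseline method the paper builds on (a generic sketch, not the paper's algorithm), the following Python snippet implements plain rejection sampling: draw a candidate from a proposal density g, and accept it with probability f(x)/(M·g(x)), where M bounds f/g. The target density sin(πx) on [0, 1] and the uniform proposal are illustrative choices; with M = 1 the acceptance rate is the normalizing constant 2/π ≈ 0.64, showing how a loose envelope translates directly into wasted evaluations.

```python
import math
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, m, n, seed=0):
    """Draw n samples from target_pdf (possibly unnormalized) by rejection
    sampling. Requires target_pdf(x) <= m * proposal_pdf(x) for all x.
    Returns (samples, empirical acceptance rate)."""
    rng = random.Random(seed)
    samples, trials = [], 0
    while len(samples) < n:
        x = proposal_sample(rng)          # candidate from the proposal g
        trials += 1
        # Accept with probability target_pdf(x) / (m * proposal_pdf(x)).
        if rng.random() * m * proposal_pdf(x) <= target_pdf(x):
            samples.append(x)
    return samples, len(samples) / trials

# Unnormalized target: proportional to sin(pi * x) on [0, 1].
target = lambda x: math.sin(math.pi * x)
samples, rate = rejection_sample(
    target,
    proposal_sample=lambda rng: rng.random(),  # Uniform(0, 1) proposal
    proposal_pdf=lambda x: 1.0,
    m=1.0,                                     # sin(pi * x) <= 1 on [0, 1]
    n=2000,
)
```

Adaptive schemes, of the kind the paper analyzes, refine the envelope `m * proposal_pdf` using previously evaluated points so that the acceptance rate approaches one.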

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-achddou19a,
  title     = {A minimax near-optimal algorithm for adaptive rejection sampling},
  author    = {Achddou, Juliette and Lam-Weil, Joseph and Carpentier, Alexandra and Blanchard, Gilles},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {94--126},
  year      = {2019},
  editor    = {Aurélien Garivier and Satyen Kale},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  address   = {Chicago, Illinois},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/achddou19a/achddou19a.pdf},
  url       = {http://proceedings.mlr.press/v98/achddou19a.html},
  abstract  = {Rejection Sampling is a fundamental Monte-Carlo method. It is used to sample from distributions admitting a probability density function which can be evaluated exactly at any given point, albeit at a high computational cost. However, without proper tuning, this technique implies a high rejection rate. Several methods have been explored to cope with this problem, based on the principle of adaptively estimating the density by a simpler function, using the information of the previous samples. Most of them either rely on strong assumptions on the form of the density, or do not offer any theoretical performance guarantee. We give the first theoretical lower bound for the problem of adaptive rejection sampling and introduce a new algorithm which guarantees a near-optimal rejection rate in a minimax sense.}
}
Endnote
%0 Conference Paper
%T A minimax near-optimal algorithm for adaptive rejection sampling
%A Juliette Achddou
%A Joseph Lam-Weil
%A Alexandra Carpentier
%A Gilles Blanchard
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-achddou19a
%I PMLR
%J Proceedings of Machine Learning Research
%P 94--126
%U http://proceedings.mlr.press
%V 98
%W PMLR
%X Rejection Sampling is a fundamental Monte-Carlo method. It is used to sample from distributions admitting a probability density function which can be evaluated exactly at any given point, albeit at a high computational cost. However, without proper tuning, this technique implies a high rejection rate. Several methods have been explored to cope with this problem, based on the principle of adaptively estimating the density by a simpler function, using the information of the previous samples. Most of them either rely on strong assumptions on the form of the density, or do not offer any theoretical performance guarantee. We give the first theoretical lower bound for the problem of adaptive rejection sampling and introduce a new algorithm which guarantees a near-optimal rejection rate in a minimax sense.
APA
Achddou, J., Lam-Weil, J., Carpentier, A. & Blanchard, G. (2019). A minimax near-optimal algorithm for adaptive rejection sampling. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in PMLR 98:94-126.