Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms

Christian Naesseth, Francisco Ruiz, Scott Linderman, David Blei
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:489-498, 2017.

Abstract

Variational inference using the reparameterization trick has enabled large-scale approximate Bayesian inference in complex probabilistic models, leveraging stochastic optimization to sidestep intractable expectations. The reparameterization trick is applicable when we can simulate a random variable by applying a differentiable deterministic function to an auxiliary random variable whose distribution is fixed. For many distributions of interest (such as the gamma or Dirichlet), simulation of random variables relies on acceptance-rejection sampling. The discontinuity introduced by the accept-reject step means that standard reparameterization tricks are not applicable. We propose a new method that lets us leverage reparameterization gradients even when variables are the output of an acceptance-rejection sampling algorithm. Our approach enables reparameterization for a larger class of variational distributions. In several studies of real and synthetic data, we show that the variance of our gradient estimator is significantly lower than that of other state-of-the-art methods, leading to faster convergence of stochastic gradient variational inference.
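To make the mechanism concrete, below is a minimal sketch in Python/NumPy (not the authors' reference code) of reparameterizing a Gamma(alpha, 1) variational distribution (alpha >= 1) through the Marsaglia-Tsang rejection sampler: the sampler proposes eps ~ N(0, 1), and the accepted eps is pushed through the smooth transformation h(eps, alpha) = (alpha - 1/3) * (1 + eps / sqrt(9*alpha - 3))^3, which the gradient can flow through. The paper's full estimator also includes a correction term for the alpha-dependence of the acceptance step; this sketch keeps only the reparameterization term, and all function names are illustrative.

    import numpy as np

    def h(eps, alpha):
        """Marsaglia-Tsang transformation; smooth in both eps and alpha."""
        return (alpha - 1.0 / 3.0) * (1.0 + eps / np.sqrt(9.0 * alpha - 3.0)) ** 3

    def dh_dalpha(eps, alpha):
        """Analytic derivative of h with respect to alpha."""
        b = 1.0 + eps / np.sqrt(9.0 * alpha - 3.0)
        db = -4.5 * eps * (9.0 * alpha - 3.0) ** (-1.5)
        return b ** 3 + 3.0 * (alpha - 1.0 / 3.0) * b ** 2 * db

    def sample_gamma(alpha, rng):
        """Draw z ~ Gamma(alpha, 1) for alpha >= 1, returning z together with
        the accepted noise eps so the gradient can flow through h."""
        d = alpha - 1.0 / 3.0
        c = 1.0 / np.sqrt(9.0 * d)
        while True:
            eps = rng.standard_normal()
            v = (1.0 + c * eps) ** 3
            if v <= 0.0:
                continue  # reject: proposal fell outside the support
            if np.log(rng.uniform()) < 0.5 * eps ** 2 + d - d * v + d * np.log(v):
                return d * v, eps

    def g_rep(f_grad, alpha, rng, n_samples=100_000):
        """Monte Carlo estimate of the reparameterization term
        E[f'(h(eps, alpha)) * dh/dalpha] over accepted eps."""
        total = 0.0
        for _ in range(n_samples):
            z, eps = sample_gamma(alpha, rng)
            total += f_grad(z) * dh_dalpha(eps, alpha)
        return total / n_samples

    # Sanity check: for f(z) = log z, E[log z] = digamma(alpha), so the true
    # gradient is trigamma(alpha); trigamma(2) = pi^2/6 - 1 ~= 0.6449.
    rng = np.random.default_rng(0)
    print(g_rep(lambda z: 1.0 / z, alpha=2.0, rng=rng))

With f(z) = log z the exact gradient is the trigamma function, so for alpha = 2 the printed estimate should land near 0.645; per the paper, the omitted correction term is small for the gamma under this sampler.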

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-naesseth17a,
  title     = {{Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms}},
  author    = {Christian Naesseth and Francisco Ruiz and Scott Linderman and David Blei},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {489--498},
  year      = {2017},
  editor    = {Aarti Singh and Jerry Zhu},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/naesseth17a/naesseth17a.pdf},
  url       = {http://proceedings.mlr.press/v54/naesseth17a.html},
  abstract  = {Variational inference using the reparameterization trick has enabled large-scale approximate Bayesian inference in complex probabilistic models, leveraging stochastic optimization to sidestep intractable expectations. The reparameterization trick is applicable when we can simulate a random variable by applying a differentiable deterministic function to an auxiliary random variable whose distribution is fixed. For many distributions of interest (such as the gamma or Dirichlet), simulation of random variables relies on acceptance-rejection sampling. The discontinuity introduced by the accept-reject step means that standard reparameterization tricks are not applicable. We propose a new method that lets us leverage reparameterization gradients even when variables are the output of an acceptance-rejection sampling algorithm. Our approach enables reparameterization for a larger class of variational distributions. In several studies of real and synthetic data, we show that the variance of our gradient estimator is significantly lower than that of other state-of-the-art methods, leading to faster convergence of stochastic gradient variational inference.}
}
Endnote
%0 Conference Paper
%T Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms
%A Christian Naesseth
%A Francisco Ruiz
%A Scott Linderman
%A David Blei
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-naesseth17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 489--498
%U http://proceedings.mlr.press/v54/naesseth17a.html
%V 54
%W PMLR
%X Variational inference using the reparameterization trick has enabled large-scale approximate Bayesian inference in complex probabilistic models, leveraging stochastic optimization to sidestep intractable expectations. The reparameterization trick is applicable when we can simulate a random variable by applying a differentiable deterministic function to an auxiliary random variable whose distribution is fixed. For many distributions of interest (such as the gamma or Dirichlet), simulation of random variables relies on acceptance-rejection sampling. The discontinuity introduced by the accept-reject step means that standard reparameterization tricks are not applicable. We propose a new method that lets us leverage reparameterization gradients even when variables are the output of an acceptance-rejection sampling algorithm. Our approach enables reparameterization for a larger class of variational distributions. In several studies of real and synthetic data, we show that the variance of our gradient estimator is significantly lower than that of other state-of-the-art methods, leading to faster convergence of stochastic gradient variational inference.
APA
Naesseth, C., Ruiz, F., Linderman, S., & Blei, D. (2017). Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:489-498.
