Variational Rejection Sampling
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:823-832, 2018.
Abstract
Learning latent variable models with stochastic variational inference is challenging when the approximate posterior is far from the true posterior, due to high variance in the gradient estimates. We propose a novel rejection sampling step that discards samples from the variational posterior which are assigned low likelihoods by the model. Our approach provides an arbitrarily accurate approximation of the true posterior at the expense of extra computation. Using a new gradient estimator for the resulting unnormalized proposal distribution, we achieve average improvements of 3.71 nats and 0.31 nats over state-of-the-art single-sample and multi-sample alternatives, respectively, for estimating marginal log-likelihoods using sigmoid belief networks on the MNIST dataset. We show both theoretically and empirically how explicitly rejecting samples, while seemingly challenging to analyze due to the implicit nature of the resulting unnormalized proposal distribution, can have benefits over competing state-of-the-art alternatives based on importance weighting. We demonstrate the effectiveness of the proposed approach via experiments on synthetic data and a benchmark density estimation task with sigmoid belief networks on the MNIST dataset.
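To make the rejection step concrete, the following is a minimal, illustrative Python sketch of the kind of accept/reject loop the abstract describes: samples from the variational posterior q are kept with a probability that grows with the log ratio of the model joint to q, shifted by a threshold T, so that low-likelihood samples tend to be discarded. The densities, the sigmoid acceptance rule, and the threshold T here are placeholder assumptions for illustration; the paper's exact acceptance function and its gradient estimator for the resulting unnormalized proposal are defined in the full text.

```python
# Illustrative sketch only: placeholder densities and a threshold-based
# acceptance rule, not the paper's exact construction.
import numpy as np

rng = np.random.default_rng(0)

def log_p_joint(z):
    # Placeholder model log-likelihood log p(x, z): a standard normal target.
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def log_q(z, mu=1.0, sigma=2.0):
    # Placeholder variational posterior log q(z | x): a mismatched Gaussian.
    return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def sample_q(mu=1.0, sigma=2.0):
    return mu + sigma * rng.standard_normal()

def rejection_sample(T=0.0, max_tries=1000):
    """Draw from the unnormalized resampled proposal r(z) ∝ q(z) * a(z),
    with acceptance probability a(z) = sigmoid(log p(x, z) - log q(z | x) - T).
    Larger T rejects more aggressively, pushing r closer to the true posterior
    at the cost of more proposal draws (accuracy vs. computation trade-off)."""
    for _ in range(max_tries):
        z = sample_q()
        log_ratio = log_p_joint(z) - log_q(z)
        accept_prob = 1.0 / (1.0 + np.exp(-(log_ratio - T)))
        if rng.uniform() < accept_prob:
            return z
    raise RuntimeError("no sample accepted; lower T or increase max_tries")

# Usage: the resampled draws concentrate closer to the target than raw q samples.
samples = np.array([rejection_sample(T=0.0) for _ in range(2000)])
print(f"resampled mean ~ {samples.mean():.3f}, std ~ {samples.std():.3f}")
```

In this sketch, the threshold T plays the role of the knob controlling the trade-off mentioned in the abstract: rejecting more samples yields a proposal closer to the true posterior but requires more computation per accepted sample.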