Black Box Variational Inference

Rajesh Ranganath, Sean Gerrish, David Blei
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:814-822, 2014.

Abstract

Variational inference has become a widely used method to approximate posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis. These efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a “black box” variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling-based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
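The core idea the abstract describes, a noisy gradient of the variational objective built from Monte Carlo samples of the variational distribution, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the toy target p(z) = N(3, 1), the Gaussian variational family, and the running-average baseline are all assumptions made here for the example; the paper's actual variance-reduction methods (Rao-Blackwellization and control variates) are more involved.

```python
import numpy as np

def log_p(z):
    # Unnormalized log density of a toy target, N(3, 1) (illustrative assumption).
    return -0.5 * (z - 3.0) ** 2

def bbvi(steps=1500, n_samples=100, lr=0.05, seed=0):
    """Score-function ("black box") gradient ascent on the ELBO
    for a Gaussian variational family q(z) = N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    mu, log_sigma = 0.0, 0.0
    baseline = 0.0  # running average of the integrand: a simple variance reducer
    for _ in range(steps):
        sigma = np.exp(log_sigma)
        z = mu + sigma * rng.standard_normal(n_samples)   # samples from q
        log_q = (-0.5 * ((z - mu) / sigma) ** 2
                 - log_sigma - 0.5 * np.log(2 * np.pi))
        f = log_p(z) - log_q                              # noisy ELBO integrand
        score_mu = (z - mu) / sigma ** 2                  # d/dmu log q(z)
        score_ls = ((z - mu) / sigma) ** 2 - 1.0          # d/dlog_sigma log q(z)
        # Monte Carlo estimate of the ELBO gradient, centered by the baseline.
        mu += lr * np.mean(score_mu * (f - baseline))
        log_sigma += lr * np.mean(score_ls * (f - baseline))
        baseline = 0.9 * baseline + 0.1 * np.mean(f)
    return mu, np.exp(log_sigma)

mu, sigma = bbvi()
```

Note that only evaluations of log p(x, z) and samples from q are needed, which is what makes the method "black box": no model-specific gradient derivations are required. On this toy target the estimate should approach the true posterior mean 3 and standard deviation 1.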

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-ranganath14,
  title     = {{Black Box Variational Inference}},
  author    = {Rajesh Ranganath and Sean Gerrish and David Blei},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {814--822},
  year      = {2014},
  editor    = {Samuel Kaski and Jukka Corander},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/ranganath14.pdf},
  url       = {http://proceedings.mlr.press/v33/ranganath14.html},
  abstract  = {Variational inference has become a widely used method to approximate posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis. These efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a “black box” variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling-based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.}
}
Endnote
%0 Conference Paper
%T Black Box Variational Inference
%A Rajesh Ranganath
%A Sean Gerrish
%A David Blei
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-ranganath14
%I PMLR
%J Proceedings of Machine Learning Research
%P 814--822
%U http://proceedings.mlr.press
%V 33
%W PMLR
%X Variational inference has become a widely used method to approximate posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis. These efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a “black box” variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling-based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
RIS
TY - CPAPER
TI - Black Box Variational Inference
AU - Rajesh Ranganath
AU - Sean Gerrish
AU - David Blei
BT - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
PY - 2014/04/02
DA - 2014/04/02
ED - Samuel Kaski
ED - Jukka Corander
ID - pmlr-v33-ranganath14
PB - PMLR
SP - 814
DP - PMLR
EP - 822
L1 - http://proceedings.mlr.press/v33/ranganath14.pdf
UR - http://proceedings.mlr.press/v33/ranganath14.html
AB - Variational inference has become a widely used method to approximate posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis. These efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a “black box” variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling-based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
ER -
APA
Ranganath, R., Gerrish, S. & Blei, D. (2014). Black Box Variational Inference. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in PMLR 33:814-822.

Related Material