Hierarchical Variational Models

Rajesh Ranganath, Dustin Tran, David Blei
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:324-333, 2016.

Abstract

Black box variational inference allows researchers to easily prototype and evaluate an array of models. Recent advances allow such algorithms to scale to high dimensions. However, a central question remains: how do we specify an expressive variational distribution that maintains efficient computation? To address this, we develop hierarchical variational models (HVMs). HVMs augment a variational approximation with a prior on its parameters, which allows it to capture complex structure for both discrete and continuous latent variables. The algorithm we develop is black box, can be used for any HVM, and has the same computational efficiency as the original approximation. We study HVMs on a variety of deep discrete latent variable models. HVMs generalize other expressive variational distributions and maintain higher fidelity to the posterior.
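As a quick illustration of the construction the abstract describes (the notation below is ours, not taken verbatim from the paper): starting from a mean-field approximation with per-variable parameters, an HVM places a variational prior q(\lambda; \theta) on those parameters and uses the resulting marginal as the variational distribution,

q(\mathbf{z} \mid \boldsymbol{\lambda}) = \prod_i q(z_i \mid \lambda_i),
\qquad
q_{\mathrm{HVM}}(\mathbf{z}; \theta) = \int q(\boldsymbol{\lambda}; \theta) \prod_i q(z_i \mid \lambda_i) \, d\boldsymbol{\lambda}.

Because the marginal mixes over \lambda, it no longer factorizes across the z_i, so it can capture posterior dependence that the mean-field family cannot, while each conditional q(z_i \mid \lambda_i) stays as cheap to work with as in the original approximation.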

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-ranganath16,
  title     = {Hierarchical Variational Models},
  author    = {Ranganath, Rajesh and Tran, Dustin and Blei, David},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {324--333},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/ranganath16.pdf},
  url       = {https://proceedings.mlr.press/v48/ranganath16.html}
}
Endnote
%0 Conference Paper
%T Hierarchical Variational Models
%A Rajesh Ranganath
%A Dustin Tran
%A David Blei
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-ranganath16
%I PMLR
%P 324--333
%U https://proceedings.mlr.press/v48/ranganath16.html
%V 48
RIS
TY - CPAPER
TI - Hierarchical Variational Models
AU - Rajesh Ranganath
AU - Dustin Tran
AU - David Blei
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-ranganath16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 324
EP - 333
L1 - http://proceedings.mlr.press/v48/ranganath16.pdf
UR - https://proceedings.mlr.press/v48/ranganath16.html
ER -
APA
Ranganath, R., Tran, D. & Blei, D. (2016). Hierarchical Variational Models. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:324-333. Available from https://proceedings.mlr.press/v48/ranganath16.html.