Bidirectional Helmholtz Machines

Jörg Bornschein, Samira Shabanian, Asja Fischer, Yoshua Bengio
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2511-2519, 2016.

Abstract

Efficient unsupervised training and inference in deep generative models remains a challenging problem. One basic approach, known as the Helmholtz machine or variational autoencoder, involves training a top-down directed generative model together with a bottom-up auxiliary model used for approximate inference. Recent results indicate that better generative models can be obtained with better approximate inference procedures. Instead of improving the inference procedure, we propose a new model, the bidirectional Helmholtz machine, which guarantees that the top-down and bottom-up distributions can efficiently invert each other. We achieve this by interpreting both the top-down and the bottom-up directed models as approximate inference distributions and by defining the model distribution to be the geometric mean of the two. We present a lower bound on the likelihood of this model and show that optimizing this bound regularizes the model so that the Bhattacharyya distance between the bottom-up and top-down approximate distributions is minimized. This approach yields state-of-the-art generative models that prefer significantly deeper architectures while allowing for orders-of-magnitude more efficient likelihood estimation.
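
In symbols, the construction the abstract describes looks roughly as follows. This is a sketch under assumed notation (p is the top-down generative model over data x and latents h, q is the bottom-up model, K is the number of inference samples); the equations paraphrase the description above rather than quote the paper:

```latex
% Model distribution: the normalized geometric mean of the top-down
% model p(x,h) and the bottom-up model q(x,h).
\[
  p^{*}(x,h) \;=\; \frac{1}{Z}\,\sqrt{p(x,h)\,q(x,h)},
  \qquad
  Z \;=\; \sum_{x,h} \sqrt{p(x,h)\,q(x,h)} \;\leq\; 1 .
\]
% By Cauchy-Schwarz, Z <= 1 with equality iff p = q, and
% -log Z = D_B(p,q) is the Bhattacharyya distance between the two
% joint distributions.  Because Z <= 1, a tractable lower bound on the
% log-likelihood follows, estimated by importance sampling from the
% bottom-up conditional q(h|x):
\[
  \log p^{*}(x) \;\geq\; 2 \log \sum_{h} \sqrt{p(x,h)\,q(h \mid x)}
  \;\approx\; 2 \log \frac{1}{K} \sum_{k=1}^{K}
      \sqrt{\frac{p\!\left(x, h^{(k)}\right)}{q\!\left(h^{(k)} \mid x\right)}},
  \qquad h^{(k)} \sim q(h \mid x) .
\]
```

Read this way, maximizing the bound does two things at once: it raises the likelihood of the data under p*, and, through the implicit -log Z term, it pushes the Bhattacharyya distance between p and q toward zero, which is what makes the two directed models efficiently invert each other.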

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-bornschein16,
  title     = {Bidirectional Helmholtz Machines},
  author    = {Bornschein, Jorg and Shabanian, Samira and Fischer, Asja and Bengio, Yoshua},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2511--2519},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/bornschein16.pdf},
  url       = {https://proceedings.mlr.press/v48/bornschein16.html}
}
Endnote
%0 Conference Paper
%T Bidirectional Helmholtz Machines
%A Jorg Bornschein
%A Samira Shabanian
%A Asja Fischer
%A Yoshua Bengio
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-bornschein16
%I PMLR
%P 2511--2519
%U https://proceedings.mlr.press/v48/bornschein16.html
%V 48
RIS
TY  - CPAPER
TI  - Bidirectional Helmholtz Machines
AU  - Jorg Bornschein
AU  - Samira Shabanian
AU  - Asja Fischer
AU  - Yoshua Bengio
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-bornschein16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 2511
EP  - 2519
L1  - http://proceedings.mlr.press/v48/bornschein16.pdf
UR  - https://proceedings.mlr.press/v48/bornschein16.html
ER  -
APA
Bornschein, J., Shabanian, S., Fischer, A. & Bengio, Y. (2016). Bidirectional Helmholtz Machines. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2511-2519. Available from https://proceedings.mlr.press/v48/bornschein16.html.
