Undirected Graphical Models as Approximate Posteriors

Arash Vahdat, Evgeny Andriyash, William Macready
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9680-9689, 2020.

Abstract

The representation of the approximate posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the approximate posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected approximate posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machines as approximate posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models. Our implementation is publicly available.
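The key idea — that gradients can flow through Markov chain Monte Carlo updates — can be illustrated with a toy sketch. The following is not the authors' estimator: it uses a concrete/Gumbel-style continuous relaxation of Gibbs updates for a tiny Boltzmann machine (an assumption made here for illustration), and checks by finite differences that, once the sampling noise is held fixed, the chain's final state is a differentiable function of the model parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relaxed_gibbs_chain(W, b, noise, tau=0.5, steps=5):
    """Run relaxed Gibbs-style sweeps for a small Boltzmann machine.

    NOTE: this continuous relaxation is an illustrative assumption, not the
    estimator from the paper. With `noise` fixed (reparameterization), the
    final state is a smooth function of (W, b), so gradients can be
    backpropagated through the MCMC updates.
    """
    n = b.size
    z = sigmoid(noise[0])  # relaxed initialization in (0, 1)
    for t in range(1, steps + 1):
        for i in range(n):
            # conditional logit of unit i given the current (relaxed) state
            logit = W[i] @ z - W[i, i] * z[i] + b[i]
            z[i] = sigmoid((logit + noise[t, i]) / tau)
    return z

# Toy setup: symmetric weights, zero diagonal, fixed logistic noise.
rng = np.random.default_rng(0)
n, steps = 3, 5
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)
noise = rng.logistic(size=(steps + 1, n))  # held fixed across evaluations

def objective(b_vec):
    # a scalar function of the chain's final state
    return relaxed_gibbs_chain(W, b_vec, noise)[0]

# Central finite differences w.r.t. b: well-defined only because the relaxed
# chain is differentiable once the noise is fixed.
eps = 1e-5
grad = np.array([
    (objective(b + eps * np.eye(n)[i]) - objective(b - eps * np.eye(n)[i])) / (2 * eps)
    for i in range(n)
])
print(grad)
```

In an autodiff framework the same construction would let the training objective's gradient with respect to the posterior parameters be computed by ordinary backpropagation through the unrolled chain, which is the mechanism the abstract describes.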

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-vahdat20a,
  title = {Undirected Graphical Models as Approximate Posteriors},
  author = {Vahdat, Arash and Andriyash, Evgeny and Macready, William},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {9680--9689},
  year = {2020},
  editor = {III, Hal Daumé and Singh, Aarti},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/vahdat20a/vahdat20a.pdf},
  url = {https://proceedings.mlr.press/v119/vahdat20a.html},
  abstract = {The representation of the approximate posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the approximate posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected approximate posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machines as approximate posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models. Our implementation is publicly available.}
}
Endnote
%0 Conference Paper
%T Undirected Graphical Models as Approximate Posteriors
%A Arash Vahdat
%A Evgeny Andriyash
%A William Macready
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-vahdat20a
%I PMLR
%P 9680--9689
%U https://proceedings.mlr.press/v119/vahdat20a.html
%V 119
%X The representation of the approximate posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the approximate posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected approximate posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machines as approximate posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models. Our implementation is publicly available.
APA
Vahdat, A., Andriyash, E. & Macready, W. (2020). Undirected Graphical Models as Approximate Posteriors. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9680-9689. Available from https://proceedings.mlr.press/v119/vahdat20a.html.