Semi-Amortized Variational Autoencoders

Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, Alexander Rush
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2678-2687, 2018.

Abstract

Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach: use AVI to initialize the variational parameters, then run stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common in training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.
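The hybrid scheme described in the abstract can be sketched in a few lines. Below is a minimal toy illustration in PyTorch, not the authors' code: all names (`ToyVAE`, `savi_elbo`, the linear encoder/decoder, step counts, and learning rates) are hypothetical. It shows the three ingredients: amortized initialization from an inference network, a few SVI gradient-ascent steps on the ELBO, and keeping those steps differentiable (`create_graph=True`) so gradients flow back into the encoder for end-to-end training.

```python
# Hedged sketch of semi-amortized variational inference on a toy Gaussian VAE.
# Illustrative only; assumes a linear encoder/decoder and a standard-normal prior.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyVAE(nn.Module):
    def __init__(self, x_dim=8, z_dim=2):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # inference network (AVI initialization)
        self.dec = nn.Linear(z_dim, x_dim)      # generative model

    def elbo(self, x, mu, logvar):
        # Single reparameterized sample; Gaussian likelihood (up to a constant)
        # plus analytic KL(q(z|x) || N(0, I)).
        eps = torch.randn_like(mu)
        z = mu + eps * torch.exp(0.5 * logvar)
        recon = -0.5 * ((x - self.dec(z)) ** 2).sum(-1)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)
        return (recon - kl).mean()

def savi_elbo(model, x, n_svi_steps=5, svi_lr=0.1):
    # 1) Amortized initialization of the local variational parameters.
    mu, logvar = model.enc(x).chunk(2, dim=-1)
    # 2) Refine them with a few SVI steps. create_graph=True keeps the
    #    refinement differentiable, so the encoder is trained through it.
    for _ in range(n_svi_steps):
        elbo = model.elbo(x, mu, logvar)
        g_mu, g_lv = torch.autograd.grad(elbo, (mu, logvar), create_graph=True)
        mu = mu + svi_lr * g_mu          # gradient *ascent* on the ELBO
        logvar = logvar + svi_lr * g_lv
    return model.elbo(x, mu, logvar)

model = ToyVAE()
x = torch.randn(16, 8)                   # synthetic data batch
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(20):
    opt.zero_grad()
    loss = -savi_elbo(model, x)          # maximize ELBO == minimize its negative
    loss.backward()
    opt.step()
final_loss = loss.item()
```

The key design point mirrored from the paper is that the inner SVI loop is part of the computation graph: backpropagating through the refinement steps is what distinguishes this from simply fine-tuning variational parameters at test time.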

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-kim18e,
  title     = {Semi-Amortized Variational Autoencoders},
  author    = {Kim, Yoon and Wiseman, Sam and Miller, Andrew and Sontag, David and Rush, Alexander},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2678--2687},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/kim18e/kim18e.pdf},
  url       = {https://proceedings.mlr.press/v80/kim18e.html},
  abstract  = {Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach, to use AVI to initialize the variational parameters and run stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common in training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.}
}
Endnote
%0 Conference Paper
%T Semi-Amortized Variational Autoencoders
%A Yoon Kim
%A Sam Wiseman
%A Andrew Miller
%A David Sontag
%A Alexander Rush
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-kim18e
%I PMLR
%P 2678--2687
%U https://proceedings.mlr.press/v80/kim18e.html
%V 80
%X Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach, to use AVI to initialize the variational parameters and run stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common in training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.
APA
Kim, Y., Wiseman, S., Miller, A., Sontag, D. & Rush, A. (2018). Semi-Amortized Variational Autoencoders. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2678-2687. Available from https://proceedings.mlr.press/v80/kim18e.html.