Symmetric Variational Autoencoder and Connections to Adversarial Learning

Liqun Chen, Shuyang Dai, Yunchen Pu, Erjin Zhou, Chunyuan Li, Qinliang Su, Changyou Chen, Lawrence Carin
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:661-669, 2018.

Abstract

A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.
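The divergence underlying the sVAE is the symmetric Kullback-Leibler divergence, KL(p||q) + KL(q||p). As a minimal illustration only (not the paper's implementation, which operates on joint encoder/decoder distributions), the symmetric KL between two univariate Gaussians can be computed in closed form:

```python
import math

def kl_gaussian(mu1, var1, mu2, var2):
    # KL(N(mu1, var1) || N(mu2, var2)) for univariate Gaussians (closed form)
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def symmetric_kl(mu1, var1, mu2, var2):
    # Symmetric KL: KL(p || q) + KL(q || p)
    return kl_gaussian(mu1, var1, mu2, var2) + kl_gaussian(mu2, var2, mu1, var1)

# Identical distributions give zero divergence
print(symmetric_kl(0.0, 1.0, 0.0, 1.0))  # → 0.0

# Unlike the standard KL, the result is invariant to argument order
print(symmetric_kl(0.0, 1.0, 1.0, 2.0) == symmetric_kl(1.0, 2.0, 0.0, 1.0))  # → True
```

This symmetry is what distinguishes the sVAE objective from the standard VAE, which uses the (asymmetric) KL in one direction only.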

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-chen18b,
  title     = {Symmetric Variational Autoencoder and Connections to Adversarial Learning},
  author    = {Chen, Liqun and Dai, Shuyang and Pu, Yunchen and Zhou, Erjin and Li, Chunyuan and Su, Qinliang and Chen, Changyou and Carin, Lawrence},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {661--669},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/chen18b/chen18b.pdf},
  url       = {https://proceedings.mlr.press/v84/chen18b.html},
  abstract  = {A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.}
}
Endnote
%0 Conference Paper
%T Symmetric Variational Autoencoder and Connections to Adversarial Learning
%A Liqun Chen
%A Shuyang Dai
%A Yunchen Pu
%A Erjin Zhou
%A Chunyuan Li
%A Qinliang Su
%A Changyou Chen
%A Lawrence Carin
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-chen18b
%I PMLR
%P 661--669
%U https://proceedings.mlr.press/v84/chen18b.html
%V 84
%X A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.
APA
Chen, L., Dai, S., Pu, Y., Zhou, E., Li, C., Su, Q., Chen, C. & Carin, L. (2018). Symmetric Variational Autoencoder and Connections to Adversarial Learning. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:661-669. Available from https://proceedings.mlr.press/v84/chen18b.html.