Variational Generative Stochastic Networks with Collaborative Shaping

Philip Bachman, Doina Precup
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1964-1972, 2015.

Abstract

We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain and shaping the chain's trajectories using a technique inspired by recent work in approximate Bayesian computation. We show that the resulting objective attains its global minimum exactly when the generative model reproduces the target distribution. To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used to regularize certain types of policy search in reinforcement learning. We present empirical results on the MNIST and TFD datasets showing that our approach offers state-of-the-art performance, both quantitatively and qualitatively.
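
One natural reading of "unrolling a variational auto-encoder into a Markov chain" is to alternate two sampling steps: given the current sample x_t, draw a latent code z ~ q(z | x_t) from the encoder, then draw the next sample x_{t+1} ~ p(x | z) from the decoder, so that the resulting trajectory is what a shaping objective can act on. The Python sketch below illustrates that unrolling under toy assumptions; it is not the authors' implementation, and the linear-Gaussian encode/decode functions, dimensions, and names are all hypothetical stand-ins for trained networks.

    # Minimal sketch: unrolling a VAE into a Markov chain (toy stand-in,
    # not the paper's code). A trained encoder/decoder would replace the
    # random linear-Gaussian maps below.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM_X, DIM_Z = 8, 2

    # Toy "trained" parameters (random, for illustration only).
    W_enc = rng.normal(scale=0.1, size=(DIM_Z, DIM_X))
    W_dec = rng.normal(scale=0.1, size=(DIM_X, DIM_Z))

    def encode(x):
        """Sample z ~ q(z | x): Gaussian with input-dependent mean."""
        return W_enc @ x + 0.1 * rng.normal(size=DIM_Z)

    def decode(z):
        """Sample x ~ p(x | z): Gaussian with latent-dependent mean."""
        return W_dec @ z + 0.1 * rng.normal(size=DIM_X)

    def unrolled_chain(x0, n_steps=5):
        """Unroll the VAE into a Markov chain by re-encoding each
        reconstruction. The trajectory {x_1, ..., x_T} is what a
        shaping term (e.g. an ABC-style discrepancy against samples
        from the target distribution) would be applied to."""
        xs = [x0]
        for _ in range(n_steps):
            z = encode(xs[-1])
            xs.append(decode(z))
        return xs

    trajectory = unrolled_chain(rng.normal(size=DIM_X))
    print([np.round(x[:3], 2) for x in trajectory])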

Cite this Paper

BibTeX
@InProceedings{pmlr-v37-bachman15,
  title     = {Variational Generative Stochastic Networks with Collaborative Shaping},
  author    = {Bachman, Philip and Precup, Doina},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1964--1972},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/bachman15.pdf},
  url       = {https://proceedings.mlr.press/v37/bachman15.html},
  abstract  = {We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain, and shaping the chain's trajectories using a technique inspired by recent work in Approximate Bayesian computation. We show that the global minimizer of the resulting objective is achieved when the generative model reproduces the target distribution. To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used for regularizing certain types of policy search in reinforcement learning. We present empirical results on the MNIST and TFD datasets which show that our approach offers state-of-the-art performance, both quantitatively and from a qualitative point of view.}
}
Endnote
%0 Conference Paper
%T Variational Generative Stochastic Networks with Collaborative Shaping
%A Philip Bachman
%A Doina Precup
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-bachman15
%I PMLR
%P 1964--1972
%U https://proceedings.mlr.press/v37/bachman15.html
%V 37
%X We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain, and shaping the chain's trajectories using a technique inspired by recent work in Approximate Bayesian computation. We show that the global minimizer of the resulting objective is achieved when the generative model reproduces the target distribution. To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used for regularizing certain types of policy search in reinforcement learning. We present empirical results on the MNIST and TFD datasets which show that our approach offers state-of-the-art performance, both quantitatively and from a qualitative point of view.
RIS
TY - CPAPER
TI - Variational Generative Stochastic Networks with Collaborative Shaping
AU - Philip Bachman
AU - Doina Precup
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-bachman15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 1964
EP - 1972
L1 - http://proceedings.mlr.press/v37/bachman15.pdf
UR - https://proceedings.mlr.press/v37/bachman15.html
AB - We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain, and shaping the chain's trajectories using a technique inspired by recent work in Approximate Bayesian computation. We show that the global minimizer of the resulting objective is achieved when the generative model reproduces the target distribution. To allow finer control over the behavior of the models, we add a regularization term inspired by techniques used for regularizing certain types of policy search in reinforcement learning. We present empirical results on the MNIST and TFD datasets which show that our approach offers state-of-the-art performance, both quantitatively and from a qualitative point of view.
ER -
APA
Bachman, P. & Precup, D. (2015). Variational Generative Stochastic Networks with Collaborative Shaping. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1964-1972. Available from https://proceedings.mlr.press/v37/bachman15.html.

Related Material

Download PDF: http://proceedings.mlr.press/v37/bachman15.pdf