Deep AutoRegressive Networks

Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, Daan Wierstra
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1242-1250, 2014.

Abstract

We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.
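
As a reading aid, the variational lower bound the abstract refers to can be written in the usual form (notation ours, not taken from the paper), with x an observation, h the stochastic hidden units, p the generative model and q the feedforward inference network:

    \log p(x) \;\ge\; \mathbb{E}_{q(h \mid x)}\big[\log p(x \mid h) + \log p(h) - \log q(h \mid x)\big]

The MDL reading is that the negative of the right-hand side is the expected description length of x: the cost of coding h under the prior plus the cost of coding x given h, minus the bits recovered from the entropy of q via bits-back coding. Minimising this description length is the same as maximising the lower bound on the log-likelihood.

The ancestral sampling mentioned in the abstract is exact because the autoregressive connections make the joint over a layer factorise unit by unit. Below is a minimal sketch for one layer of binary units; the parameterisation here (a weight matrix W, bias b and optional top-down context) is an illustrative assumption, not the paper's exact one:

import numpy as np

def sample_autoregressive_layer(W, b, rng, context=None):
    """Ancestrally sample a binary hidden vector one unit at a time.

    Unit h[j] is Bernoulli with logit b[j] + W[j, :j] @ h[:j]
    (plus an optional top-down `context` term), so the joint
    factorises as prod_j p(h_j | h_{<j}) and sampling is exact.
    W, b and context are hypothetical parameters for illustration.
    """
    n = len(b)
    h = np.zeros(n)
    for j in range(n):
        # Only the strictly lower-triangular part of W is ever read.
        logit = b[j] + W[j, :j] @ h[:j]
        if context is not None:
            logit += context[j]
        p = 1.0 / (1.0 + np.exp(-logit))  # sigmoid
        h[j] = float(rng.random() < p)
    return h

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.1, size=(n, n))
b = rng.normal(scale=0.1, size=n)
print(sample_autoregressive_layer(W, b, rng))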

Cite this Paper

BibTeX
@InProceedings{pmlr-v32-gregor14,
  title     = {Deep AutoRegressive Networks},
  author    = {Gregor, Karol and Danihelka, Ivo and Mnih, Andriy and Blundell, Charles and Wierstra, Daan},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1242--1250},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/gregor14.pdf},
  url       = {https://proceedings.mlr.press/v32/gregor14.html},
  abstract  = {We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.}
}
Endnote
%0 Conference Paper
%T Deep AutoRegressive Networks
%A Karol Gregor
%A Ivo Danihelka
%A Andriy Mnih
%A Charles Blundell
%A Daan Wierstra
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-gregor14
%I PMLR
%P 1242--1250
%U https://proceedings.mlr.press/v32/gregor14.html
%V 32
%N 2
%X We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.
RIS
TY - CPAPER
TI - Deep AutoRegressive Networks
AU - Karol Gregor
AU - Ivo Danihelka
AU - Andriy Mnih
AU - Charles Blundell
AU - Daan Wierstra
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-gregor14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 1242
EP - 1250
L1 - http://proceedings.mlr.press/v32/gregor14.pdf
UR - https://proceedings.mlr.press/v32/gregor14.html
AB - We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.
ER -
APA
Gregor, K., Danihelka, I., Mnih, A., Blundell, C. & Wierstra, D. (2014). Deep AutoRegressive Networks. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1242-1250. Available from https://proceedings.mlr.press/v32/gregor14.html.
