Approximation properties of DBNs with binary hidden units and real-valued visible units

Oswin Krause, Asja Fischer, Tobias Glasmachers, Christian Igel
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):419-426, 2013.

Abstract

Deep belief networks (DBNs) can approximate any distribution over fixed-length binary vectors. However, DBNs are frequently applied to model real-valued data, and so far little is known about their representational power in this case. We analyze the approximation properties of DBNs with two layers of binary hidden units and visible units with conditional distributions from the exponential family. It is shown that these DBNs can, under mild assumptions, model any additive mixture of distributions from the exponential family with independent variables. An arbitrarily good approximation in terms of Kullback-Leibler divergence of an m-dimensional mixture distribution with n components can be achieved by a DBN with m visible variables and n and n+1 hidden variables in the first and second hidden layer, respectively. Furthermore, relevant infinite mixtures can be approximated arbitrarily well by a DBN with a finite number of neurons. This includes the important special case of an infinite mixture of Gaussian distributions with fixed variance restricted to a compact domain, which in turn can approximate any strictly positive density over this domain.
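A compact way to read the main result (using notation chosen for this summary rather than taken from the paper) is the following sketch of the Gaussian special case. The DBN marginal over the visible units is

\[
  p_{\mathrm{DBN}}(v) \;=\; \sum_{h^{(1)} \in \{0,1\}^{n}} \; \sum_{h^{(2)} \in \{0,1\}^{n+1}} p\bigl(v \mid h^{(1)}\bigr)\, p\bigl(h^{(1)}, h^{(2)}\bigr),
  \qquad v \in \mathbb{R}^{m},
\]

where $p(h^{(1)}, h^{(2)})$ is a binary restricted Boltzmann machine over the two hidden layers and the visible conditional factorizes over exponential-family units, for example fixed-variance Gaussians

\[
  p\bigl(v \mid h^{(1)}\bigr) \;=\; \prod_{j=1}^{m} \mathcal{N}\!\Bigl(v_j \;\Big|\; b_j + \sum_{i=1}^{n} W_{ji}\, h^{(1)}_i,\; \sigma^{2}\Bigr),
\]

with weights $W$, biases $b$, and variance $\sigma^{2}$ introduced here only for illustration. The theorem then guarantees that for any target mixture $p^{*}(v) = \sum_{k=1}^{n} \alpha_k \prod_{j=1}^{m} q_{k,j}(v_j)$ with admissible exponential-family components $q_{k,j}$ and any $\varepsilon > 0$, DBN parameters exist with $\mathrm{KL}(p^{*} \,\|\, p_{\mathrm{DBN}}) < \varepsilon$.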

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-krause13,
  title     = {Approximation properties of {DBNs} with binary hidden units and real-valued visible units},
  author    = {Krause, Oswin and Fischer, Asja and Glasmachers, Tobias and Igel, Christian},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {419--426},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/krause13.pdf},
  url       = {https://proceedings.mlr.press/v28/krause13.html}
}
APA
Krause, O., Fischer, A., Glasmachers, T. & Igel, C. (2013). Approximation properties of DBNs with binary hidden units and real-valued visible units. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):419-426. Available from https://proceedings.mlr.press/v28/krause13.html.

Related Material

Download PDF: http://proceedings.mlr.press/v28/krause13.pdf