On the Relative Expressiveness of Bayesian and Neural Networks

Arthur Choi, Adnan Darwiche
Proceedings of the Ninth International Conference on Probabilistic Graphical Models, PMLR 72:157-168, 2018.

Abstract

A neural network computes a function. A central property of neural networks is that they are “universal approximators:” for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those by Bayesian network queries, showing that the former are more expressive than the latter. Moreover, we propose a simple augmentation to Bayesian networks (a testing operator), which enables their queries to become “universal approximators” as well.

Cite this Paper


BibTeX
@InProceedings{pmlr-v72-choi18a,
  title     = {On the Relative Expressiveness of Bayesian and Neural Networks},
  author    = {Choi, Arthur and Darwiche, Adnan},
  booktitle = {Proceedings of the Ninth International Conference on Probabilistic Graphical Models},
  pages     = {157--168},
  year      = {2018},
  editor    = {Václav Kratochvíl and Milan Studený},
  volume    = {72},
  series    = {Proceedings of Machine Learning Research},
  address   = {Prague, Czech Republic},
  month     = {11--14 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v72/choi18a/choi18a.pdf},
  url       = {http://proceedings.mlr.press/v72/choi18a.html},
  abstract  = {A neural network computes a function. A central property of neural networks is that they are “universal approximators:” for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those by Bayesian network queries, showing that the former are more expressive than the latter. Moreover, we propose a simple augmentation to Bayesian networks (a testing operator), which enables their queries to become “universal approximators” as well.}
}
Endnote
%0 Conference Paper
%T On the Relative Expressiveness of Bayesian and Neural Networks
%A Arthur Choi
%A Adnan Darwiche
%B Proceedings of the Ninth International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2018
%E Václav Kratochvíl
%E Milan Studený
%F pmlr-v72-choi18a
%I PMLR
%J Proceedings of Machine Learning Research
%P 157--168
%U http://proceedings.mlr.press
%V 72
%W PMLR
%X A neural network computes a function. A central property of neural networks is that they are “universal approximators:” for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those by Bayesian network queries, showing that the former are more expressive than the latter. Moreover, we propose a simple augmentation to Bayesian networks (a testing operator), which enables their queries to become “universal approximators” as well.
APA
Choi, A. & Darwiche, A. (2018). On the Relative Expressiveness of Bayesian and Neural Networks. Proceedings of the Ninth International Conference on Probabilistic Graphical Models, in PMLR 72:157-168.
