Strudel: Learning Structured-Decomposable Probabilistic Circuits

Meihua Dang, Antonio Vergari, Guy Van den Broeck
Proceedings of the 10th International Conference on Probabilistic Graphical Models, PMLR 138:137-148, 2020.

Abstract

Probabilistic circuits (PCs) represent a probability distribution as a computational graph. Enforcing structural properties on these graphs guarantees that several inference scenarios become tractable. Among these properties, structured decomposability is a particularly appealing one: it enables the efficient and exact computations of the probability of complex logical formulas, and can be used to reason about the expected output of certain predictive models under missing data. This paper proposes Strudel, a simple, fast and accurate learning algorithm for structured-decomposable PCs. Compared to prior work for learning structured-decomposable PCs, Strudel delivers more accurate single PC models in fewer iterations, and dramatically scales learning when building ensembles of PCs. It achieves this scalability by exploiting another structural property of PCs, called determinism, and by sharing the same computational graph across mixture components. We show these advantages on standard density estimation benchmarks and challenging inference scenarios.
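To make the abstract's key notions concrete, here is a minimal illustrative sketch (not the paper's Strudel implementation, and not its actual learned structures) of a probabilistic circuit over two binary variables. Sum nodes mix distributions, product nodes factorize over disjoint variable scopes (decomposability), and each sum node's children have disjoint supports (determinism); marginalizing a variable amounts to setting its indicator leaves to 1. All function and variable names below are invented for illustration.

```python
# Illustrative sketch of a probabilistic circuit (PC) over binary
# variables X1, X2. Leaves are indicators; an assignment of None
# marginalizes a variable out by making its indicators return 1.0.

def leaf(var, value):
    # Indicator leaf: 1.0 if `var` matches `value`, or 1.0 when the
    # variable is marginalized out (assignment maps it to None).
    def f(assignment):
        x = assignment.get(var)
        return 1.0 if x is None or x == value else 0.0
    return f

def product(*children):
    # Decomposable product node: children have disjoint variable scopes.
    def f(assignment):
        p = 1.0
        for c in children:
            p *= c(assignment)
        return p
    return f

def sum_node(weighted_children):
    # Sum (mixture) node; here it is also deterministic, since for any
    # complete input at most one child evaluates to a nonzero value.
    def f(assignment):
        return sum(w * c(assignment) for w, c in weighted_children)
    return f

# P(X1, X2) = 0.3 * [X1=1][X2=1]
#           + 0.7 * [X1=0] * (0.5 [X2=1] + 0.5 [X2=0])
pc = sum_node([
    (0.3, product(leaf("X1", 1), leaf("X2", 1))),
    (0.7, product(leaf("X1", 0),
                  sum_node([(0.5, leaf("X2", 1)),
                            (0.5, leaf("X2", 0))]))),
])

print(pc({"X1": 1, "X2": 1}))     # complete evidence: 0.3
print(pc({"X1": 0, "X2": None}))  # marginal P(X1=0) = 0.7
```

Both queries run in a single feedforward pass over the graph, which is the tractability the abstract refers to; structured decomposability additionally requires all product nodes to agree on a common hierarchical variable partition (a vtree), which this two-variable example satisfies trivially.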

Cite this Paper


BibTeX
@InProceedings{pmlr-v138-dang20a,
  title     = {Strudel: Learning Structured-Decomposable Probabilistic Circuits},
  author    = {Dang, Meihua and Vergari, Antonio and Van den Broeck, Guy},
  booktitle = {Proceedings of the 10th International Conference on Probabilistic Graphical Models},
  pages     = {137--148},
  year      = {2020},
  editor    = {Jaeger, Manfred and Nielsen, Thomas Dyhre},
  volume    = {138},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--25 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v138/dang20a/dang20a.pdf},
  url       = {https://proceedings.mlr.press/v138/dang20a.html},
  abstract  = {Probabilistic circuits (PCs) represent a probability distribution as a computational graph. Enforcing structural properties on these graphs guarantees that several inference scenarios become tractable. Among these properties, structured decomposability is a particularly appealing one: it enables the efficient and exact computations of the probability of complex logical formulas, and can be used to reason about the expected output of certain predictive models under missing data. This paper proposes Strudel, a simple, fast and accurate learning algorithm for structured-decomposable PCs. Compared to prior work for learning structured-decomposable PCs, Strudel delivers more accurate single PC models in fewer iterations, and dramatically scales learning when building ensembles of PCs. It achieves this scalability by exploiting another structural property of PCs, called determinism, and by sharing the same computational graph across mixture components. We show these advantages on standard density estimation benchmarks and challenging inference scenarios.}
}
Endnote
%0 Conference Paper
%T Strudel: Learning Structured-Decomposable Probabilistic Circuits
%A Meihua Dang
%A Antonio Vergari
%A Guy Van den Broeck
%B Proceedings of the 10th International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2020
%E Manfred Jaeger
%E Thomas Dyhre Nielsen
%F pmlr-v138-dang20a
%I PMLR
%P 137--148
%U https://proceedings.mlr.press/v138/dang20a.html
%V 138
%X Probabilistic circuits (PCs) represent a probability distribution as a computational graph. Enforcing structural properties on these graphs guarantees that several inference scenarios become tractable. Among these properties, structured decomposability is a particularly appealing one: it enables the efficient and exact computations of the probability of complex logical formulas, and can be used to reason about the expected output of certain predictive models under missing data. This paper proposes Strudel, a simple, fast and accurate learning algorithm for structured-decomposable PCs. Compared to prior work for learning structured-decomposable PCs, Strudel delivers more accurate single PC models in fewer iterations, and dramatically scales learning when building ensembles of PCs. It achieves this scalability by exploiting another structural property of PCs, called determinism, and by sharing the same computational graph across mixture components. We show these advantages on standard density estimation benchmarks and challenging inference scenarios.
APA
Dang, M., Vergari, A. & Van den Broeck, G. (2020). Strudel: Learning Structured-Decomposable Probabilistic Circuits. Proceedings of the 10th International Conference on Probabilistic Graphical Models, in Proceedings of Machine Learning Research 138:137-148. Available from https://proceedings.mlr.press/v138/dang20a.html.