Who Learns Better Bayesian Network Structures: Constraint-Based, Score-based or Hybrid Algorithms?

Marco Scutari, Catharina Elisabeth Graafland, José Manuel Gutiérrez
Proceedings of the Ninth International Conference on Probabilistic Graphical Models, PMLR 72:416-427, 2018.

Abstract

The literature groups algorithms to learn the structure of Bayesian networks from data in three separate classes: constraint-based algorithms, which use conditional independence tests to learn the dependence structure of the data; score-based algorithms, which use goodness-of-fit scores as objective functions to maximise; and hybrid algorithms that combine both approaches. Famously, Cowell (2001) showed that algorithms in the first two classes learn the same structures when the topological ordering of the network is known and we use entropy to assess conditional independence and goodness of fit. In this paper we address the complementary question: how do these classes of algorithms perform outside of the assumptions above? We approach this question by recognising that structure learning is defined by the combination of a statistical criterion and an algorithm that determines how the criterion is applied to the data. Removing the confounding effect of different choices for the statistical criterion, we find using both simulated and real-world data that constraint-based algorithms do not appear to be more efficient or more sensitive to errors than score-based algorithms; and that hybrid algorithms are not faster or more accurate than constraint-based algorithms. This suggests that commonly held beliefs on structure learning in the literature are strongly influenced by the choice of particular statistical criteria rather than just properties of the algorithms themselves.
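To make the distinction in the abstract concrete, here is a minimal, self-contained sketch (not the paper's code, and deliberately simplified) of a *score-based* learner: greedy hill climbing over DAGs on binary variables, maximising the decomposable BIC score. The synthetic variables `A`, `B`, `C` and all function names are illustrative assumptions, not anything from the paper.

```python
import math
import random

def local_bic(child, parents, data):
    """BIC contribution of one binary node given its parent set.
    BIC decomposes node-by-node, so the total score is a sum of these terms."""
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / tot)
    n_params = 2 ** len(parents)  # one free parameter per parent configuration
    return ll - 0.5 * n_params * math.log(n)

def bic(parents_of, data):
    return sum(local_bic(v, tuple(sorted(ps)), data)
               for v, ps in parents_of.items())

def acyclic(parents_of):
    """DFS cycle check on the graph given as a node -> parent-set map."""
    seen, stack = set(), set()
    def visit(v):
        if v in stack:
            return False
        if v in seen:
            return True
        stack.add(v)
        ok = all(visit(p) for p in parents_of[v])
        stack.remove(v)
        seen.add(v)
        return ok
    return all(visit(v) for v in parents_of)

def hill_climb(nodes, data):
    """Greedy search: start from the empty graph, repeatedly apply the
    single-edge addition/deletion/reversal that most improves BIC."""
    parents_of = {v: set() for v in nodes}
    score = bic(parents_of, data)
    improved = True
    while improved:
        improved, best = False, None
        for u in nodes:
            for v in nodes:
                if u == v:
                    continue
                for op in ("add", "del", "rev"):
                    cand = {k: set(ps) for k, ps in parents_of.items()}
                    if op == "add" and u not in cand[v]:
                        cand[v].add(u)
                    elif op == "del" and u in cand[v]:
                        cand[v].remove(u)
                    elif op == "rev" and u in cand[v]:
                        cand[v].remove(u)
                        cand[u].add(v)
                    else:
                        continue
                    if not acyclic(cand):
                        continue
                    s = bic(cand, data)
                    if s > score + 1e-9 and (best is None or s > best[0]):
                        best = (s, cand)
        if best:
            score, parents_of = best
            improved = True
    return parents_of, score

# Synthetic data: B depends on A (10% noise), C is independent of both.
random.seed(1)
data = []
for _ in range(500):
    a = random.random() < 0.5
    b = a if random.random() < 0.9 else not a
    c = random.random() < 0.5
    data.append({"A": int(a), "B": int(b), "C": int(c)})

dag, score = hill_climb(["A", "B", "C"], data)
print(dag)
```

A constraint-based learner would instead probe the same data with conditional independence tests (e.g. rejecting A ⊥ B but not finding dependence involving C); the paper's point is that once both approaches are given the same statistical criterion, the usual performance folklore no longer holds. With this dependence strength and sample size, the sketch recovers an edge between A and B (in either direction, since BIC is score-equivalent).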

Cite this Paper


BibTeX
@InProceedings{pmlr-v72-scutari18a,
  title     = {Who Learns Better Bayesian Network Structures: Constraint-Based, Score-based or Hybrid Algorithms?},
  author    = {Scutari, Marco and Graafland, Catharina Elisabeth and Guti{\'e}rrez, Jos{\'e} Manuel},
  booktitle = {Proceedings of the Ninth International Conference on Probabilistic Graphical Models},
  pages     = {416--427},
  year      = {2018},
  editor    = {Kratochv{\'\i}l, V{\'a}clav and Studen{\'y}, Milan},
  volume    = {72},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v72/scutari18a/scutari18a.pdf},
  url       = {https://proceedings.mlr.press/v72/scutari18a.html},
  abstract  = {The literature groups algorithms to learn the structure of Bayesian networks from data in three separate classes: constraint-based algorithms, which use conditional independence tests to learn the dependence structure of the data; score-based algorithms, which use goodness-of-fit scores as objective functions to maximise; and hybrid algorithms that combine both approaches. Famously, Cowell (2001) showed that algorithms in the first two classes learn the same structures when the topological ordering of the network is known and we use entropy to assess conditional independence and goodness of fit. In this paper we address the complementary question: how do these classes of algorithms perform outside of the assumptions above? We approach this question by recognising that structure learning is defined by the combination of a statistical criterion and an algorithm that determines how the criterion is applied to the data. Removing the confounding effect of different choices for the statistical criterion, we find using both simulated and real-world data that constraint-based algorithms do not appear to be more efficient or more sensitive to errors than score-based algorithms; and that hybrid algorithms are not faster or more accurate than constraint-based algorithms. This suggests that commonly held beliefs on structure learning in the literature are strongly influenced by the choice of particular statistical criteria rather than just properties of the algorithms themselves.}
}
Endnote
%0 Conference Paper
%T Who Learns Better Bayesian Network Structures: Constraint-Based, Score-based or Hybrid Algorithms?
%A Marco Scutari
%A Catharina Elisabeth Graafland
%A José Manuel Gutiérrez
%B Proceedings of the Ninth International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2018
%E Václav Kratochvíl
%E Milan Studený
%F pmlr-v72-scutari18a
%I PMLR
%P 416--427
%U https://proceedings.mlr.press/v72/scutari18a.html
%V 72
%X The literature groups algorithms to learn the structure of Bayesian networks from data in three separate classes: constraint-based algorithms, which use conditional independence tests to learn the dependence structure of the data; score-based algorithms, which use goodness-of-fit scores as objective functions to maximise; and hybrid algorithms that combine both approaches. Famously, Cowell (2001) showed that algorithms in the first two classes learn the same structures when the topological ordering of the network is known and we use entropy to assess conditional independence and goodness of fit. In this paper we address the complementary question: how do these classes of algorithms perform outside of the assumptions above? We approach this question by recognising that structure learning is defined by the combination of a statistical criterion and an algorithm that determines how the criterion is applied to the data. Removing the confounding effect of different choices for the statistical criterion, we find using both simulated and real-world data that constraint-based algorithms do not appear to be more efficient or more sensitive to errors than score-based algorithms; and that hybrid algorithms are not faster or more accurate than constraint-based algorithms. This suggests that commonly held beliefs on structure learning in the literature are strongly influenced by the choice of particular statistical criteria rather than just properties of the algorithms themselves.
APA
Scutari, M., Graafland, C.E. & Gutiérrez, J.M. (2018). Who Learns Better Bayesian Network Structures: Constraint-Based, Score-based or Hybrid Algorithms? Proceedings of the Ninth International Conference on Probabilistic Graphical Models, in Proceedings of Machine Learning Research 72:416-427. Available from https://proceedings.mlr.press/v72/scutari18a.html.