Who Learns Better Bayesian Network Structures: Constraint-Based, Score-based or Hybrid Algorithms?
Proceedings of the Ninth International Conference on Probabilistic Graphical Models, PMLR 72:416-427, 2018.
Abstract
The literature groups algorithms to learn the structure of Bayesian networks from data in three separate classes: constraint-based algorithms, which use conditional independence tests to learn the dependence structure of the data; score-based algorithms, which use goodness-of-fit scores as objective functions to maximise; and hybrid algorithms that combine both approaches. Famously, Cowell (2001) showed that algorithms in the first two classes learn the same structures when the topological ordering of the network is known and we use entropy to assess conditional independence and goodness of fit. In this paper we address the complementary question: how do these classes of algorithms perform outside of the assumptions above? We approach this question by recognising that structure learning is defined by the combination of a statistical criterion and an algorithm that determines how the criterion is applied to the data. Removing the confounding effect of different choices for the statistical criterion, we find using both simulated and real-world data that constraint-based algorithms do not appear to be more efficient or more sensitive to errors than score-based algorithms; and that hybrid algorithms are not faster or more accurate than constraint-based algorithms. This suggests that commonly held beliefs on structure learning in the literature are strongly influenced by the choice of particular statistical criteria rather than just properties of the algorithms themselves.
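The entropy-based equivalence due to Cowell (2001) that the abstract refers to can be illustrated numerically: for discrete data, the G² conditional independence statistic used by constraint-based algorithms equals twice the log-likelihood gain that a score-based algorithm would assign to adding the corresponding edge. The sketch below is illustrative only (the variable names and toy data are not from the paper) and shows the marginal, two-variable case of that identity.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in nats."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def log_lik(xs, parents=None):
    """Maximised multinomial log-likelihood of xs, optionally given one parent column."""
    n = len(xs)
    if parents is None:
        return sum(c * math.log(c / n) for c in Counter(xs).values())
    joint, pcount = Counter(zip(parents, xs)), Counter(parents)
    return sum(c * math.log(c / pcount[p]) for (p, x), c in joint.items())

# Toy binary data (hypothetical, for illustration only).
data_x = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
data_y = [0, 1, 1, 1, 0, 1, 0, 0, 1, 0]
n = len(data_x)

# Constraint-based view: G^2 independence test statistic for X vs Y.
g2 = 2 * n * mutual_information(data_x, data_y)

# Score-based view: log-likelihood gain from adding the edge X -> Y.
delta_ll = log_lik(data_y, parents=data_x) - log_lik(data_y)

# The two criteria coincide: G^2 = 2 * (score gain).
print(abs(g2 - 2 * delta_ll) < 1e-9)
```

Because the two quantities are algebraically identical, any disagreement between the algorithm classes observed in the paper must come from how the criterion is applied, which is precisely the confounding the authors set out to remove.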