# Who Learns Better Bayesian Network Structures: Constraint-Based, Score-based or Hybrid Algorithms?

*Proceedings of the Ninth International Conference on Probabilistic Graphical Models*, PMLR 72:416-427, 2018.

#### Abstract

The literature groups algorithms that learn the structure of Bayesian networks from data into three separate classes:

- *constraint-based algorithms*, which use conditional independence tests to learn the dependence structure of the data;
- *score-based algorithms*, which use goodness-of-fit scores as objective functions to maximise; and
- *hybrid algorithms*, which combine both approaches.

Famously, Cowell (2001) showed that algorithms in the first two classes learn the same structures when the topological ordering of the network is known and entropy is used to assess both conditional independence and goodness of fit. In this paper we address the complementary question: how do these classes of algorithms perform outside of the assumptions above? We approach this question by recognising that structure learning is defined by the combination of a *statistical criterion* and an *algorithm* that determines how the criterion is applied to the data. Removing the confounding effect of different choices for the statistical criterion, we find, using both simulated and real-world data, that constraint-based algorithms do not appear to be more efficient or more sensitive to errors than score-based algorithms, and that hybrid algorithms are not faster or more accurate than constraint-based algorithms. This suggests that commonly held beliefs about structure learning in the literature are strongly influenced by the choice of particular statistical criteria rather than by properties of the algorithms themselves.
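To make the score-based class concrete, the following is a minimal, self-contained sketch of greedy hill climbing over DAGs with a BIC-style score on synthetic binary data. The toy network (A → B, A → C), the variable names, and the score details are illustrative assumptions for this sketch, not the paper's experimental setup:

```python
import itertools
import math
import random

random.seed(0)
VARS = [0, 1, 2]  # hypothetical variables A, B, C

def sample(n=2000):
    """Synthetic binary data from an assumed network A -> B, A -> C."""
    rows = []
    for _ in range(n):
        a = int(random.random() < 0.5)
        b = int(random.random() < (0.8 if a else 0.2))
        c = int(random.random() < (0.7 if a else 0.3))
        rows.append((a, b, c))
    return rows

def local_bic(data, child, parents):
    """BIC-style local score: log-likelihood of child given parents, penalised."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for pair in counts.values():
        tot = sum(pair)
        for c in pair:
            if c:
                ll += c * math.log(c / tot)
    # Approximation: one free parameter per observed parent configuration
    # (the child is binary).
    return ll - 0.5 * len(counts) * math.log(len(data))

def bic(data, dag):
    # A decomposable score: the total is a sum of per-node local scores.
    return sum(local_bic(data, v, sorted(dag[v])) for v in VARS)

def is_acyclic(dag):
    seen, stack = set(), set()
    def visit(v):
        if v in stack:
            return False
        if v in seen:
            return True
        seen.add(v); stack.add(v)
        ok = all(visit(p) for p in dag[v])
        stack.discard(v)
        return ok
    return all(visit(v) for v in VARS)

def hill_climb(data):
    """Greedy search: toggle single edges while the BIC score improves."""
    dag = {v: set() for v in VARS}  # parent sets, initially the empty graph
    best = bic(data, dag)
    improved = True
    while improved:
        improved = False
        for parent, child in itertools.permutations(VARS, 2):
            cand = {v: set(ps) for v, ps in dag.items()}
            if parent in cand[child]:
                cand[child].discard(parent)  # try deleting the edge
            else:
                cand[child].add(parent)      # try adding the edge
            if not is_acyclic(cand):
                continue
            score = bic(data, cand)
            if score > best + 1e-9:
                dag, best, improved = cand, score, True
    return dag, best

data = sample()
dag, score = hill_climb(data)
print({v: sorted(ps) for v, ps in dag.items()})
```

A constraint-based counterpart would instead decide each edge by running conditional independence tests on the same contingency counts; the paper's point is that once both approaches use the same statistical criterion, the remaining differences between the classes are smaller than commonly believed.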