Quotient Normalized Maximum Likelihood Criterion for Learning Bayesian Network Structures

Tomi Silander, Janne Leppä-aho, Elias Jääsaari, Teemu Roos
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:948-957, 2018.

Abstract

We introduce an information theoretic criterion for Bayesian network structure learning which we call quotient normalized maximum likelihood (qNML). In contrast to the closely related factorized normalized maximum likelihood criterion, qNML satisfies the property of score equivalence. It is also decomposable and completely free of adjustable hyperparameters. For practical computations, we identify a remarkably accurate approximation proposed earlier by Szpankowski and Weinberger. Experiments on both simulated and real data demonstrate that the new criterion leads to parsimonious models with good predictive accuracy.
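For orientation, the criterion named in the abstract can be sketched as follows. This is a hedged reconstruction from the abstract's description, not quoted from this page; the notation (parent sets $\mathrm{Pa}_i$, the single-multinomial NML distribution $P^{1}_{\mathrm{NML}}$, and the maximum likelihood parameters $\hat{\theta}$) is assumed. Each variable–parent family is treated as one collapsed categorical variable, and the score is the sum over nodes of the log-ratio of the NML probabilities of the family data and the parent data:

```latex
% Hedged sketch of the qNML score (notation assumed, not taken from this page).
% G is a network structure over X_1,...,X_n; Pa_i is the parent set of X_i;
% D_S denotes the data restricted to the variable set S, treated as a single
% categorical variable; theta-hat is the maximum likelihood parameter vector.
s_{\mathrm{qNML}}(G; D)
  \;=\; \sum_{i=1}^{n} \log
    \frac{P^{1}_{\mathrm{NML}}\!\left(D_{X_i,\,\mathrm{Pa}_i}\right)}
         {P^{1}_{\mathrm{NML}}\!\left(D_{\mathrm{Pa}_i}\right)},
\qquad
P^{1}_{\mathrm{NML}}(D)
  \;=\; \frac{P\!\left(D \mid \hat{\theta}(D)\right)}
             {\sum_{D'} P\!\left(D' \mid \hat{\theta}(D')\right)}.
```

The normalizing sum in the denominator of $P^{1}_{\mathrm{NML}}$ ranges over all data sets of the same size; its cost is what the Szpankowski–Weinberger approximation mentioned in the abstract makes practical to compute.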

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-silander18a,
  title     = {Quotient Normalized Maximum Likelihood Criterion for Learning Bayesian Network Structures},
  author    = {Silander, Tomi and Leppä-aho, Janne and Jääsaari, Elias and Roos, Teemu},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {948--957},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/silander18a/silander18a.pdf},
  url       = {https://proceedings.mlr.press/v84/silander18a.html},
  abstract  = {We introduce an information theoretic criterion for Bayesian network structure learning which we call quotient normalized maximum likelihood (qNML). In contrast to the closely related factorized normalized maximum likelihood criterion, qNML satisfies the property of score equivalence. It is also decomposable and completely free of adjustable hyperparameters. For practical computations, we identify a remarkably accurate approximation proposed earlier by Szpankowski and Weinberger. Experiments on both simulated and real data demonstrate that the new criterion leads to parsimonious models with good predictive accuracy.}
}
Endnote
%0 Conference Paper
%T Quotient Normalized Maximum Likelihood Criterion for Learning Bayesian Network Structures
%A Tomi Silander
%A Janne Leppä-aho
%A Elias Jääsaari
%A Teemu Roos
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-silander18a
%I PMLR
%P 948--957
%U https://proceedings.mlr.press/v84/silander18a.html
%V 84
%X We introduce an information theoretic criterion for Bayesian network structure learning which we call quotient normalized maximum likelihood (qNML). In contrast to the closely related factorized normalized maximum likelihood criterion, qNML satisfies the property of score equivalence. It is also decomposable and completely free of adjustable hyperparameters. For practical computations, we identify a remarkably accurate approximation proposed earlier by Szpankowski and Weinberger. Experiments on both simulated and real data demonstrate that the new criterion leads to parsimonious models with good predictive accuracy.
APA
Silander, T., Leppä-aho, J., Jääsaari, E. & Roos, T. (2018). Quotient Normalized Maximum Likelihood Criterion for Learning Bayesian Network Structures. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:948-957. Available from https://proceedings.mlr.press/v84/silander18a.html.