Backoff methods for estimating parameters of a Bayesian network

Wray Buntine
Proceedings of The 3rd International Workshop on Advanced Methodologies for Bayesian Networks, PMLR 73:3-3, 2017.

Abstract

Various authors have highlighted inadequacies of BDeu-type scores, and this problem is shared in parameter estimation. Basically, Laplace estimates work poorly, not least because setting the prior concentration is challenging. In 1997, Friedman et al. suggested a simple backoff approach for Bayesian network classifiers (BNCs). Backoff methods dominate in n-gram language models, with modified Kneser-Ney smoothing being the best known, and a Bayesian variant exists in the form of the Pitman-Yor process language models of Teh (2006). In this talk we will present some results on using backoff methods for Bayesian network classifiers and Bayesian networks generally. For BNCs at least, the improvements are dramatic and alleviate some of the issues of choosing too dense a network.
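To make the idea concrete, here is a minimal sketch of backoff parameter estimation for one conditional probability table of a Bayesian network: the empirical estimate of P(child | parents) is shrunk toward the marginal P(child), so sparse or unseen parent configurations fall back on the lower-order distribution. This is a generic interpolated-backoff smoother for illustration only, not the specific method presented in the talk; the data, the weight `alpha`, and the helper names are all assumptions.

```python
from collections import Counter

# Toy observations for one node: (parent_configuration, child_value).
# These data are hypothetical, chosen only to illustrate the mechanics.
data = [("00", 1), ("00", 1), ("00", 0), ("01", 1), ("10", 0)]

joint = Counter(data)                 # n(parent, child)
parent = Counter(p for p, _ in data)  # n(parent)
child = Counter(c for _, c in data)   # n(child)
total = len(data)
child_vals = sorted(child)            # observed child states

def marginal(c, gamma=1.0):
    """Laplace-smoothed marginal P(child = c): the backoff distribution."""
    return (child[c] + gamma) / (total + gamma * len(child_vals))

def backoff(p, c, alpha=1.0):
    """P(child = c | parent = p): the raw count shrunk toward the
    marginal, with alpha pseudo-observations drawn from the backoff.
    For an unseen parent configuration this returns the marginal."""
    return (joint[(p, c)] + alpha * marginal(c)) / (parent[p] + alpha)
```

For the unseen configuration `"11"`, `backoff("11", 1)` equals `marginal(1)`, while for the well-observed `"00"` the estimate stays close to the raw frequency 2/3. The single concentration `alpha` plays the role of the prior strength that, as the abstract notes, is otherwise hard to set for a flat Laplace estimate.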

Cite this Paper


BibTeX
@InProceedings{pmlr-v73-buntine17a,
  title = {Backoff methods for estimating parameters of a Bayesian network},
  author = {Wray Buntine},
  booktitle = {Proceedings of The 3rd International Workshop on Advanced Methodologies for Bayesian Networks},
  pages = {3--3},
  year = {2017},
  editor = {Antti Hyttinen and Joe Suzuki and Brandon Malone},
  volume = {73},
  series = {Proceedings of Machine Learning Research},
  month = {20--22 Sep},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v73/buntine17a/buntine17a.pdf},
  url = {http://proceedings.mlr.press/v73/buntine17a.html},
  abstract = {Various authors have highlighted inadequacies of BDeu-type scores, and this problem is shared in parameter estimation. Basically, Laplace estimates work poorly, not least because setting the prior concentration is challenging. In 1997, Friedman et al. suggested a simple backoff approach for Bayesian network classifiers (BNCs). Backoff methods dominate in n-gram language models, with modified Kneser-Ney smoothing being the best known, and a Bayesian variant exists in the form of the Pitman-Yor process language models of Teh (2006). In this talk we will present some results on using backoff methods for Bayesian network classifiers and Bayesian networks generally. For BNCs at least, the improvements are dramatic and alleviate some of the issues of choosing too dense a network.}
}
Endnote
%0 Conference Paper
%T Backoff methods for estimating parameters of a Bayesian network
%A Wray Buntine
%B Proceedings of The 3rd International Workshop on Advanced Methodologies for Bayesian Networks
%C Proceedings of Machine Learning Research
%D 2017
%E Antti Hyttinen
%E Joe Suzuki
%E Brandon Malone
%F pmlr-v73-buntine17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 3--3
%U http://proceedings.mlr.press
%V 73
%W PMLR
%X Various authors have highlighted inadequacies of BDeu-type scores, and this problem is shared in parameter estimation. Basically, Laplace estimates work poorly, not least because setting the prior concentration is challenging. In 1997, Friedman et al. suggested a simple backoff approach for Bayesian network classifiers (BNCs). Backoff methods dominate in n-gram language models, with modified Kneser-Ney smoothing being the best known, and a Bayesian variant exists in the form of the Pitman-Yor process language models of Teh (2006). In this talk we will present some results on using backoff methods for Bayesian network classifiers and Bayesian networks generally. For BNCs at least, the improvements are dramatic and alleviate some of the issues of choosing too dense a network.
APA
Buntine, W. (2017). Backoff methods for estimating parameters of a Bayesian network. Proceedings of The 3rd International Workshop on Advanced Methodologies for Bayesian Networks, in PMLR 73:3-3