Efficient inference in persistent Dynamic Bayesian Networks

Tomáš Šingliar, Denver H. Dash
Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, PMLR R6:494-502, 2008.

Abstract

Numerous temporal inference tasks such as fault monitoring and anomaly detection exhibit a persistence property: for example, if something breaks, it stays broken until an intervention. When modeled as a Dynamic Bayesian Network, persistence adds dependencies between adjacent time slices, often making exact inference over time intractable using standard inference algorithms. However, we show that persistence implies a regular structure that can be exploited for efficient inference. We present three successively more general classes of models: persistent causal chains (PCCs), persistent causal trees (PCTs) and persistent polytrees (PPTs), and the corresponding exact inference algorithms that exploit persistence. We show that analytic asymptotic bounds for our algorithms compare favorably to junction tree inference; and we demonstrate empirically that we can perform exact smoothing on the order of 100 times faster than the approximate Boyen-Koller method on randomly generated instances of persistent tree models. We also show how to handle non-persistent variables and how persistence can be exploited effectively for approximate filtering.

Cite this Paper


BibTeX
@InProceedings{pmlr-vR6-singliar08a,
  title     = {Efficient inference in persistent Dynamic Bayesian Networks},
  author    = {\v{S}ingliar, Tom\'{a}\v{s} and Dash, Denver H.},
  booktitle = {Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence},
  pages     = {494--502},
  year      = {2008},
  editor    = {McAllester, David A. and Myllymäki, Petri},
  volume    = {R6},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/r6/main/assets/singliar08a/singliar08a.pdf},
  url       = {https://proceedings.mlr.press/r6/singliar08a.html},
  abstract  = {Numerous temporal inference tasks such as fault monitoring and anomaly detection exhibit a persistence property: for example, if something breaks, it stays broken until an intervention. When modeled as a Dynamic Bayesian Network, persistence adds dependencies between adjacent time slices, often making exact inference over time intractable using standard inference algorithms. However, we show that persistence implies a regular structure that can be exploited for efficient inference. We present three successively more general classes of models: persistent causal chains (PCCs), persistent causal trees (PCTs) and persistent polytrees (PPTs), and the corresponding exact inference algorithms that exploit persistence. We show that analytic asymptotic bounds for our algorithms compare favorably to junction tree inference; and we demonstrate empirically that we can perform exact smoothing on the order of 100 times faster than the approximate Boyen-Koller method on randomly generated instances of persistent tree models. We also show how to handle non-persistent variables and how persistence can be exploited effectively for approximate filtering.},
  note      = {Reissued by PMLR on 09 October 2024.}
}
Endnote
%0 Conference Paper
%T Efficient inference in persistent Dynamic Bayesian Networks
%A Tomáš Šingliar
%A Denver H. Dash
%B Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2008
%E David A. McAllester
%E Petri Myllymäki
%F pmlr-vR6-singliar08a
%I PMLR
%P 494--502
%U https://proceedings.mlr.press/r6/singliar08a.html
%V R6
%X Numerous temporal inference tasks such as fault monitoring and anomaly detection exhibit a persistence property: for example, if something breaks, it stays broken until an intervention. When modeled as a Dynamic Bayesian Network, persistence adds dependencies between adjacent time slices, often making exact inference over time intractable using standard inference algorithms. However, we show that persistence implies a regular structure that can be exploited for efficient inference. We present three successively more general classes of models: persistent causal chains (PCCs), persistent causal trees (PCTs) and persistent polytrees (PPTs), and the corresponding exact inference algorithms that exploit persistence. We show that analytic asymptotic bounds for our algorithms compare favorably to junction tree inference; and we demonstrate empirically that we can perform exact smoothing on the order of 100 times faster than the approximate Boyen-Koller method on randomly generated instances of persistent tree models. We also show how to handle non-persistent variables and how persistence can be exploited effectively for approximate filtering.
%Z Reissued by PMLR on 09 October 2024.
APA
Šingliar, T. & Dash, D. H. (2008). Efficient inference in persistent Dynamic Bayesian Networks. Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research R6:494-502. Available from https://proceedings.mlr.press/r6/singliar08a.html. Reissued by PMLR on 09 October 2024.