Gibbs sampling in factorized continuous-time Markov processes

Tal El-Hay, Nir Friedman, Raz Kupferman
Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, PMLR R6:169-178, 2008.

Abstract

A central task in many applications is reasoning about processes that change over continuous time. Continuous-Time Bayesian Networks are a general, compact representation language for multi-component continuous-time processes. However, exact inference in such processes is exponential in the number of components, and thus infeasible for most models of interest. Here we develop a novel Gibbs sampling procedure for multi-component processes. This procedure iteratively samples a trajectory for one of the components given the remaining ones. We show how to perform exact sampling that adapts to the natural time scale of the sampled process. Moreover, we show that this sampling procedure naturally exploits the structure of the network to reduce the computational cost of each step. This procedure is the first that can provide asymptotically unbiased approximations in such processes.
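
As an illustration of the overall structure described in the abstract, below is a minimal Python sketch (not the paper's algorithm): trajectories are stored as piecewise-constant functions, and a Gibbs-style sweep resamples one component at a time while the others are held fixed. The resampling step shown is plain forward simulation from the component's conditional rate, which changes only when a neighbor jumps; the paper's contribution is an exact conditional sampler that additionally handles evidence and the likelihood of the component's children, which this sketch omits. The names (Trajectory, toy_conditional_rate, resample_component, gibbs_sweeps), the binary state space, and the Ising-like toy rates are all assumptions made for illustration.

import bisect
import random

class Trajectory:
    """Piecewise-constant trajectory on [0, T]: sorted jump times and states."""
    def __init__(self, times, states):
        self.times = times      # times[0] == 0.0
        self.states = states    # states[k] holds on [times[k], times[k+1])

    def state_at(self, t):
        k = bisect.bisect_right(self.times, t) - 1
        return self.states[k]

def toy_conditional_rate(x_i, neighbor_states, base_rate=1.0, coupling=0.5):
    """Hypothetical flip rate for a binary component: leaving state x_i is
    faster the more neighbors currently disagree with it (Ising-like toy)."""
    disagree = sum(1 for s in neighbor_states if s != x_i)
    return base_rate * (1.0 + coupling * disagree)

def resample_component(i, trajs, neighbors, T, rng):
    """Forward-sample a new trajectory for component i on [0, T], holding the
    other components' trajectories fixed.  Its conditional rate is piecewise
    constant, changing only when a neighbor jumps, so we simulate segment by
    segment (Gillespie-style within each segment)."""
    others = [trajs[j] for j in neighbors[i]]
    breaks = sorted({0.0, T} | {t for tr in others for t in tr.times if t < T})
    x = trajs[i].state_at(0.0)          # keep the initial state fixed
    times, states = [0.0], [x]
    for seg_start, seg_end in zip(breaks[:-1], breaks[1:]):
        nb = [tr.state_at(seg_start) for tr in others]
        t = seg_start
        while True:
            t += rng.expovariate(toy_conditional_rate(x, nb))
            if t >= seg_end:
                break                   # no further jump in this segment
            x = 1 - x                   # binary flip
            times.append(t)
            states.append(x)
    return Trajectory(times, states)

def gibbs_sweeps(trajs, neighbors, T, n_sweeps=100, seed=0):
    """Outer loop of the Gibbs procedure: repeatedly resample each component
    given the current trajectories of its neighbors in the network."""
    rng = random.Random(seed)
    for _ in range(n_sweeps):
        for i in range(len(trajs)):
            trajs[i] = resample_component(i, trajs, neighbors, T, rng)
    return trajs

if __name__ == "__main__":
    T = 5.0
    neighbors = {0: [1], 1: [0, 2], 2: [1]}   # three components on a chain
    trajs = [Trajectory([0.0], [0]) for _ in range(3)]
    trajs = gibbs_sweeps(trajs, neighbors, T)
    print([tr.state_at(T) for tr in trajs])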

Cite this Paper


BibTeX
@InProceedings{pmlr-vR6-el-hay08a,
  title     = {Gibbs sampling in factorized continuous-time Markov processes},
  author    = {El-Hay, Tal and Friedman, Nir and Kupferman, Raz},
  booktitle = {Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence},
  pages     = {169--178},
  year      = {2008},
  editor    = {McAllester, David A. and Myllymäki, Petri},
  volume    = {R6},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/r6/main/assets/el-hay08a/el-hay08a.pdf},
  url       = {https://proceedings.mlr.press/r6/el-hay08a.html},
  abstract  = {A central task in many applications is reasoning about processes that change over continuous time. Continuous-Time Bayesian Networks is a general compact representation language for multi-component continuous-time processes. However, exact inference in such processes is exponential in the number of components, and thus infeasible for most models of interest. Here we develop a novel Gibbs sampling procedure for multi-component processes. This procedure iteratively samples a trajectory for one of the components given the remaining ones. We show how to perform exact sampling that adapts to the natural time scale of the sampled process. Moreover, we show that this sampling procedure naturally exploits the structure of the network to reduce the computational cost of each step. This procedure is the first that can provide asymptotically unbiased approximation in such processes.},
  note      = {Reissued by PMLR on 09 October 2024.}
}
Endnote
%0 Conference Paper
%T Gibbs sampling in factorized continuous-time Markov processes
%A Tal El-Hay
%A Nir Friedman
%A Raz Kupferman
%B Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2008
%E David A. McAllester
%E Petri Myllymäki
%F pmlr-vR6-el-hay08a
%I PMLR
%P 169--178
%U https://proceedings.mlr.press/r6/el-hay08a.html
%V R6
%X A central task in many applications is reasoning about processes that change over continuous time. Continuous-Time Bayesian Networks is a general compact representation language for multi-component continuous-time processes. However, exact inference in such processes is exponential in the number of components, and thus infeasible for most models of interest. Here we develop a novel Gibbs sampling procedure for multi-component processes. This procedure iteratively samples a trajectory for one of the components given the remaining ones. We show how to perform exact sampling that adapts to the natural time scale of the sampled process. Moreover, we show that this sampling procedure naturally exploits the structure of the network to reduce the computational cost of each step. This procedure is the first that can provide asymptotically unbiased approximation in such processes.
%Z Reissued by PMLR on 09 October 2024.
APA
El-Hay, T., Friedman, N. & Kupferman, R. (2008). Gibbs sampling in factorized continuous-time Markov processes. Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research R6:169-178. Available from https://proceedings.mlr.press/r6/el-hay08a.html. Reissued by PMLR on 09 October 2024.