JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes

Jonathan Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash Mansinghka
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:693-701, 2015.

Abstract

Markov jump processes (MJPs) are used to model a wide range of phenomena, from disease progression to RNA path folding. However, existing methods suffer from a number of shortcomings: degenerate trajectories in the case of maximum likelihood (ML) estimation of parametric models, and poor inferential performance in the case of nonparametric models. We take a small-variance asymptotics (SVA) approach to overcome these limitations. We derive the small-variance asymptotics for parametric and nonparametric MJPs for both directly observed and hidden state models. In the parametric case we obtain a novel objective function which leads to non-degenerate trajectories. To derive the nonparametric version we introduce the gamma-gamma process, a novel extension to the gamma-exponential process. We propose algorithms for each of these formulations, which we call JUMP-means. Our experiments demonstrate that JUMP-means is competitive with or outperforms widely used MJP inference approaches in terms of both speed and reconstruction accuracy.
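
For readers unfamiliar with the model class, the sketch below simulates a trajectory from a simple MJP: the process holds each state for an exponentially distributed time governed by a rate matrix Q, then jumps to another state with probability proportional to the off-diagonal rates. This illustrates only the generative model the abstract refers to, not the paper's JUMP-means inference algorithm; the function name `simulate_mjp` and the example rate matrix are our own assumptions for illustration.

```python
import numpy as np

def simulate_mjp(Q, initial_state, T, rng=None):
    """Sample an MJP trajectory on [0, T].

    Q is a rate matrix: off-diagonal Q[i, j] >= 0 is the jump rate
    from state i to state j, and each row sums to zero. Returns the
    jump times and the state held from each jump time onward.
    """
    rng = np.random.default_rng() if rng is None else rng
    times, states = [0.0], [initial_state]
    t, s = 0.0, initial_state
    while True:
        rate = -Q[s, s]                    # total rate of leaving state s
        if rate <= 0:                      # absorbing state: hold until T
            break
        t += rng.exponential(1.0 / rate)   # exponential holding time
        if t >= T:
            break
        probs = Q[s].copy()
        probs[s] = 0.0
        probs /= rate                      # jump distribution over other states
        s = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(s)
    return np.array(times), np.array(states)

# Example: a three-state process, e.g. stages of disease progression.
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.5, -1.2,  0.7],
              [ 0.2,  0.3, -0.5]])
times, states = simulate_mjp(Q, initial_state=0, T=10.0)
print(list(zip(times, states)))
```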

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-hugginsa15,
  title     = {JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes},
  author    = {Huggins, Jonathan and Narasimhan, Karthik and Saeedi, Ardavan and Mansinghka, Vikash},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {693--701},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/hugginsa15.pdf},
  url       = {https://proceedings.mlr.press/v37/hugginsa15.html},
}
APA
Huggins, J., Narasimhan, K., Saeedi, A. & Mansinghka, V. (2015). JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:693-701. Available from https://proceedings.mlr.press/v37/hugginsa15.html.
