Estimation of Markov Chain via Rank-Constrained Likelihood

Xudong Li, Mengdi Wang, Anru Zhang
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3033-3042, 2018.

Abstract

This paper studies the estimation of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the $\ell_2$ risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex functions) programming algorithm to tackle the rank-constrained non-smooth optimization problem. Convergence results are established. Experiments show that the proposed estimator achieves better empirical performance than other popular approaches.
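To make the estimation problem concrete, the snippet below is a minimal, hypothetical NumPy sketch, not the paper's estimator or its DC programming algorithm: it builds the row-normalized empirical transition matrix from one trajectory (the unconstrained MLE), truncates it to a target rank with an SVD, and pushes each row back onto the probability simplex by clipping and renormalizing. All function names and the toy chain are illustrative assumptions.

```python
import numpy as np

def empirical_counts(trajectory, n_states):
    """Count observed transitions i -> j along a single trajectory."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(trajectory[:-1], trajectory[1:]):
        counts[s, s_next] += 1
    return counts

def low_rank_transition_estimate(trajectory, n_states, rank):
    """Rough low-rank estimate of a transition matrix (illustrative only).

    Form the row-normalized empirical transition matrix, truncate it to the
    target rank with an SVD, then return each row to the probability simplex
    by clipping and renormalizing. This is NOT the rank-constrained MLE of
    the paper; it only illustrates the input/output of the problem.
    """
    counts = empirical_counts(trajectory, n_states)
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0               # unvisited states: avoid 0/0
    p_hat = counts / row_sums                   # full-rank empirical MLE

    U, s, Vt = np.linalg.svd(p_hat, full_matrices=False)
    p_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    p_low = np.clip(p_low, 1e-12, None)         # crude simplex projection
    return p_low / p_low.sum(axis=1, keepdims=True)

# Toy example: a 6-state chain whose true transition matrix has rank 2.
rng = np.random.default_rng(0)
row_a = np.array([0.3, 0.3, 0.2, 0.1, 0.05, 0.05])
row_b = np.array([0.1, 0.1, 0.1, 0.3, 0.2, 0.2])
true_P = np.vstack([row_a, row_a, row_a, row_b, row_b, row_b])
traj = [0]
for _ in range(5000):
    traj.append(rng.choice(6, p=true_P[traj[-1]]))
P_hat = low_rank_transition_estimate(traj, n_states=6, rank=2)
print(np.round(P_hat, 2))
```

Unlike this SVD-then-project heuristic, the paper's approach maximizes the likelihood subject to the rank constraint directly, which is the non-smooth, non-convex problem its DC programming algorithm is designed to solve.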

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-li18g, title = {Estimation of {M}arkov Chain via Rank-Constrained Likelihood}, author = {Li, Xudong and Wang, Mengdi and Zhang, Anru}, booktitle = {Proceedings of the 35th International Conference on Machine Learning}, pages = {3033--3042}, year = {2018}, editor = {Dy, Jennifer and Krause, Andreas}, volume = {80}, series = {Proceedings of Machine Learning Research}, month = {10--15 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v80/li18g/li18g.pdf}, url = {https://proceedings.mlr.press/v80/li18g.html}, abstract = {This paper studies the estimation of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the $\ell_2$ risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex functions) programming algorithm to tackle the rank-constrained non-smooth optimization problem. Convergence results are established. Experiments show that the proposed estimator achieves better empirical performance than other popular approaches.} }
Endnote
%0 Conference Paper %T Estimation of Markov Chain via Rank-Constrained Likelihood %A Xudong Li %A Mengdi Wang %A Anru Zhang %B Proceedings of the 35th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2018 %E Jennifer Dy %E Andreas Krause %F pmlr-v80-li18g %I PMLR %P 3033--3042 %U https://proceedings.mlr.press/v80/li18g.html %V 80 %X This paper studies the estimation of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the $\ell_2$ risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex functions) programming algorithm to tackle the rank-constrained non-smooth optimization problem. Convergence results are established. Experiments show that the proposed estimator achieves better empirical performance than other popular approaches.
APA
Li, X., Wang, M. & Zhang, A. (2018). Estimation of Markov Chain via Rank-Constrained Likelihood. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3033-3042. Available from https://proceedings.mlr.press/v80/li18g.html.