Estimating the Mixing Time of Ergodic Markov Chains

Geoffrey Wolfer, Aryeh Kontorovich
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3120-3159, 2019.

Abstract

We address the problem of estimating the mixing time $t_{\mathsf{mix}}$ of an arbitrary ergodic finite Markov chain from a single trajectory of length $m$. The reversible case was addressed by Hsu et al. [2018+], who left the general case as an open problem. In the reversible case, the analysis is greatly facilitated by the fact that the Markov operator is self-adjoint, and Weyl’s inequality allows for a dimension-free perturbation analysis of the empirical eigenvalues. As Hsu et al. point out, in the absence of reversibility (and hence, the non-symmetry of the pair probabilities matrix), the existing perturbation analysis has a worst-case exponential dependence on the number of states $d$. Furthermore, even if an eigenvalue perturbation analysis with better dependence on $d$ were available, in the non-reversible case the connection between the spectral gap and the mixing time is not nearly as straightforward as in the reversible case. Our key insight is to estimate the pseudo-spectral gap instead, which allows us to overcome the loss of self-adjointness and to achieve a polynomial dependence on $d$ and the minimal stationary probability $\pi_\star$. Additionally, in the reversible case, we obtain simultaneous nearly (up to logarithmic factors) minimax rates in $t_{\mathsf{mix}}$ and precision $\varepsilon$, closing a gap in Hsu et al., who treated $\varepsilon$ as constant in the lower bounds. Finally, we construct fully empirical confidence intervals for the pseudo-spectral gap, which shrink to zero at a rate of roughly $1/\sqrt m$, and improve the state of the art in even the reversible case.
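
For context, the quantities in the abstract can be made precise as follows (a brief sketch; the pseudo-spectral gap is due to Paulin [2015], and the constants below are indicative rather than optimized). The mixing time of a chain with transition matrix $P$ and stationary distribution $\pi$ is $t_{\mathsf{mix}} = \min\{t \geq 1 : \max_i \|P^t(i,\cdot) - \pi\|_{\mathsf{TV}} \leq 1/4\}$. When $P$ is reversible and ergodic, its eigenvalues are real, $1 = \lambda_1 > \lambda_2 \geq \dots \geq \lambda_d > -1$, and the absolute spectral gap $\gamma_\star = 1 - \max\{\lambda_2, |\lambda_d|\}$ satisfies the standard sandwich $(1/\gamma_\star - 1)\ln 2 \leq t_{\mathsf{mix}} \leq (1/\gamma_\star)\ln(4/\pi_\star)$. In the general ergodic case, with time reversal $P^*(i,j) = \pi(j)P(j,i)/\pi(i)$, the pseudo-spectral gap is $\gamma_{\mathsf{ps}} = \max_{k \geq 1} \gamma((P^*)^k P^k)/k$, where $\gamma(\cdot)$ denotes the spectral gap of the reversible matrix $(P^*)^k P^k$; it controls mixing from both sides, roughly $1/(2\gamma_{\mathsf{ps}}) \leq t_{\mathsf{mix}} \lesssim (1/\gamma_{\mathsf{ps}})\ln(1/\pi_\star)$, which is what makes it a viable estimation target.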

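To make single-trajectory estimation concrete, the following is a minimal plug-in sketch in Python (a hypothetical illustration, not the authors' estimator or their confidence-interval construction): estimate $\hat{P}$ from transition counts and $\hat{\pi}$ from visit frequencies, then take the best spectral gap among the multiplicative reversibilizations $(\hat{P}^*)^k \hat{P}^k$, with the maximum over $k$ truncated.

import numpy as np

def pseudo_spectral_gap_estimate(trajectory, d, k_max=10):
    # Plug-in sketch: gamma_ps-hat = max_k gap((P_hat*)^k P_hat^k) / k,
    # truncated at k_max. `trajectory` is a 1-D array of states in
    # {0, ..., d-1}; assumes every state is visited at least once.
    x = np.asarray(trajectory)

    # Empirical transition matrix from raw transition counts.
    counts = np.zeros((d, d))
    np.add.at(counts, (x[:-1], x[1:]), 1.0)
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)

    # Empirical stationary distribution from visit frequencies.
    pi = np.maximum(np.bincount(x, minlength=d) / len(x), 1e-12)

    # Time reversal: P*(i, j) = pi(j) P(j, i) / pi(i).
    P_star = P.T * pi[None, :] / pi[:, None]

    gamma_ps = 0.0
    for k in range(1, k_max + 1):
        # (P*)^k P^k is reversible w.r.t. pi (exactly so for the true pi;
        # only approximately for plug-in estimates), so its spectrum is
        # essentially real, with top eigenvalue close to 1.
        M = np.linalg.matrix_power(P_star, k) @ np.linalg.matrix_power(P, k)
        eigs = np.sort(np.real(np.linalg.eigvals(M)))
        gamma_ps = max(gamma_ps, (1.0 - eigs[-2]) / k)
    return gamma_ps

Since $\gamma((P^*)^k P^k) \leq 1$, terms with $k > 1/\hat{\gamma}_{\mathsf{ps}}$ cannot improve the maximum, so truncating at a moderate $k_{\max}$ is a mild simplification.
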
Cite this Paper


BibTeX
@InProceedings{pmlr-v99-wolfer19a,
  title     = {Estimating the Mixing Time of Ergodic Markov Chains},
  author    = {Wolfer, Geoffrey and Kontorovich, Aryeh},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {3120--3159},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/wolfer19a/wolfer19a.pdf},
  url       = {https://proceedings.mlr.press/v99/wolfer19a.html}
}
APA
Wolfer, G., & Kontorovich, A. (2019). Estimating the Mixing Time of Ergodic Markov Chains. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:3120-3159. Available from https://proceedings.mlr.press/v99/wolfer19a.html.
