Mixing Time Estimation in Ergodic Markov Chains from a Single Trajectory with Contraction Methods

Geoffrey Wolfer
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:890-905, 2020.

Abstract

The mixing time $t_{\mathsf{mix}}$ of an ergodic Markov chain measures the rate of convergence towards its stationary distribution $\boldsymbol{\pi}$. We consider the problem of estimating $t_{\mathsf{mix}}$ from a single trajectory of $m$ observations $(X_1, \ldots, X_m)$, in the case where the transition kernel $\boldsymbol{M}$ is unknown, a research program started by Hsu et al. [2015]. The community has so far focused primarily on leveraging spectral methods to estimate the relaxation time $t_{\mathsf{rel}}$ of a reversible Markov chain as a proxy for $t_{\mathsf{mix}}$. Although these techniques have recently been extended to tackle non-reversible chains, this general setting remains much less understood. Our new approach based on contraction methods is the first that aims at directly estimating $t_{\mathsf{mix}}$, up to small universal multiplicative constants, instead of $t_{\mathsf{rel}}$. It does so by introducing a generalized version of Dobrushin’s contraction coefficient, $\kappa_{\mathsf{gen}}$, which is shown to control the mixing time regardless of reversibility. We subsequently design fully data-dependent high-confidence intervals around $\kappa_{\mathsf{gen}}$ that generally yield better convergence guarantees and are more practical than the state of the art.
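To make the contraction idea concrete: the classical Dobrushin coefficient (the paper's $\kappa_{\mathsf{gen}}$ is a generalization not reproduced here) is the maximum total-variation distance between any two rows of the transition matrix, and when it is strictly below one it yields an upper bound on the mixing time. The sketch below, with hypothetical helper names, illustrates this standard bound; it is not the estimator from the paper, which works from a single observed trajectory rather than a known kernel.

```python
import numpy as np

def dobrushin_coefficient(M):
    """Classical Dobrushin contraction coefficient: the maximum
    total-variation distance between any two rows of the kernel M."""
    n = M.shape[0]
    return max(
        0.5 * np.abs(M[i] - M[j]).sum()
        for i in range(n) for j in range(n)
    )

def mixing_time_upper_bound(M, eps=0.25):
    """Standard bound: ||mu M^t - pi||_TV <= kappa**t for any start mu,
    so t_mix(eps) <= ceil(log(eps) / log(kappa)) whenever kappa < 1."""
    kappa = dobrushin_coefficient(M)
    if kappa >= 1.0:
        return None  # one-step contraction gives no bound
    if kappa == 0.0:
        return 1     # all rows identical: the chain mixes in one step
    return int(np.ceil(np.log(eps) / np.log(kappa)))

# Toy two-state chain (illustrative values only)
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
kappa = dobrushin_coefficient(M)            # 0.5 * (0.7 + 0.7) = 0.7
t_bound = mixing_time_upper_bound(M, 0.25)  # ceil(ln 0.25 / ln 0.7) = 4
```

Estimating a contraction coefficient rather than a spectral gap is what lets the approach bypass the reversibility assumption needed by relaxation-time methods.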

Cite this Paper


BibTeX
@InProceedings{pmlr-v117-wolfer20a,
  title     = {Mixing Time Estimation in Ergodic Markov Chains from a Single Trajectory with Contraction Methods},
  author    = {Wolfer, Geoffrey},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  pages     = {890--905},
  year      = {2020},
  editor    = {Aryeh Kontorovich and Gergely Neu},
  volume    = {117},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {08 Feb--11 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v117/wolfer20a/wolfer20a.pdf},
  url       = {http://proceedings.mlr.press/v117/wolfer20a.html},
  abstract  = {The mixing time $t_{\mathsf{mix}}$ of an ergodic Markov chain measures the rate of convergence towards its stationary distribution $\boldsymbol{\pi}$. We consider the problem of estimating $t_{\mathsf{mix}}$ from one single trajectory of $m$ observations $(X_1, \ldots, X_m)$, in the case where the transition kernel $\boldsymbol{M}$ is unknown, a research program started by Hsu et al. [2015]. The community has so far focused primarily on leveraging spectral methods to estimate the relaxation time $t_{\mathsf{rel}}$ of a reversible Markov chain as a proxy for $t_{\mathsf{mix}}$. Although these techniques have recently been extended to tackle non-reversible chains, this general setting remains much less understood. Our new approach based on contraction methods is the first that aims at directly estimating $t_{\mathsf{mix}}$ up to multiplicative small universal constants instead of $t_{\mathsf{rel}}$. It does so by introducing a generalized version of Dobrushin’s contraction coefficient $\kappa_{\mathsf{gen}}$, which is shown to control the mixing time regardless of reversibility. We subsequently design fully data-dependent high confidence intervals around $\kappa_{\mathsf{gen}}$ that generally yield better convergence guarantees and are more practical than state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Mixing Time Estimation in Ergodic Markov Chains from a Single Trajectory with Contraction Methods
%A Geoffrey Wolfer
%B Proceedings of the 31st International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Aryeh Kontorovich
%E Gergely Neu
%F pmlr-v117-wolfer20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 890--905
%U http://proceedings.mlr.press
%V 117
%W PMLR
%X The mixing time $t_{\mathsf{mix}}$ of an ergodic Markov chain measures the rate of convergence towards its stationary distribution $\boldsymbol{\pi}$. We consider the problem of estimating $t_{\mathsf{mix}}$ from one single trajectory of $m$ observations $(X_1, \ldots, X_m)$, in the case where the transition kernel $\boldsymbol{M}$ is unknown, a research program started by Hsu et al. [2015]. The community has so far focused primarily on leveraging spectral methods to estimate the relaxation time $t_{\mathsf{rel}}$ of a reversible Markov chain as a proxy for $t_{\mathsf{mix}}$. Although these techniques have recently been extended to tackle non-reversible chains, this general setting remains much less understood. Our new approach based on contraction methods is the first that aims at directly estimating $t_{\mathsf{mix}}$ up to multiplicative small universal constants instead of $t_{\mathsf{rel}}$. It does so by introducing a generalized version of Dobrushin’s contraction coefficient $\kappa_{\mathsf{gen}}$, which is shown to control the mixing time regardless of reversibility. We subsequently design fully data-dependent high confidence intervals around $\kappa_{\mathsf{gen}}$ that generally yield better convergence guarantees and are more practical than state-of-the-art.
APA
Wolfer, G. (2020). Mixing Time Estimation in Ergodic Markov Chains from a Single Trajectory with Contraction Methods. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in PMLR 117:890-905.
