Testing Symmetric Markov Chains From a Single Trajectory
Proceedings of the 31st Conference On Learning Theory, PMLR 75:385-409, 2018.
Abstract
Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a {\em single trajectory of a Markov Chain}. In particular, we observe a single trajectory $X_0,\ldots,X_t,\ldots$ of an unknown, symmetric, and finite state Markov Chain $\cal M$. We do not control the starting state $X_0$, and we cannot restart the chain. Given our single trajectory, the goal is to test whether $\cal M$ is identical to a model Markov Chain ${\cal M}'$, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors when the hitting times of the model chain ${\cal M}'$ are $\tilde{O}(n)$ in the size $n$ of the state space.
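To make the single-trajectory setting concrete, the following is a minimal sketch (not the paper's tester) of the observation model: we simulate one trajectory $X_0,\ldots,X_t$ from a symmetric (doubly stochastic) transition matrix and tally the empirical transition counts an observer would have to work with. All function names, the example matrix, and the trajectory length are illustrative assumptions.

```python
import numpy as np

def sample_trajectory(P, t, x0=0, rng=None):
    """Sample a single trajectory X_0, ..., X_t from transition matrix P.

    Illustrative only: models the paper's access model (one trajectory,
    uncontrolled start, no restarts), not its testing algorithm.
    """
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    traj = np.empty(t + 1, dtype=int)
    traj[0] = x0
    for i in range(1, t + 1):
        # Each step is drawn from the row of P indexed by the current state.
        traj[i] = rng.choice(n, p=P[traj[i - 1]])
    return traj

def empirical_transitions(traj, n):
    """Count observed i -> j transitions along the single trajectory."""
    counts = np.zeros((n, n))
    for a, b in zip(traj[:-1], traj[1:]):
        counts[a, b] += 1
    return counts

# A symmetric 3-state chain: P = P^T, hence doubly stochastic.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

traj = sample_trajectory(P, t=10_000, rng=0)
counts = empirical_transitions(traj, n=3)
# Row-normalize the counts to get an empirical transition estimate P_hat;
# for a long trajectory of an ergodic chain, P_hat approaches P.
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(np.abs(P_hat - P).max())
```

Note the asymmetry with classical distribution testing: the rows of `counts` are visited at different rates depending on the trajectory, so the per-state sample sizes are random and correlated, which is exactly what makes hitting times of the model chain enter the sample-complexity bounds.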