Testing Symmetric Markov Chains From a Single Trajectory

Constantinos Daskalakis, Nishanth Dikkala, Nick Gravin
Proceedings of the 31st Conference On Learning Theory, PMLR 75:385-409, 2018.

Abstract

Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a {\em single trajectory of a Markov Chain}. In particular, we observe a single trajectory $X_0,\ldots,X_t,\ldots$ of an unknown, symmetric, and finite state Markov Chain $\cal M$. We do not control the starting state $X_0$, and we cannot restart the chain. Given our single trajectory, the goal is to test whether $\cal M$ is identical to a model Markov Chain ${\cal M}'$, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain ${\cal M}'$ are $\tilde{O}(n)$ in the size of the state space $n$.
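To make the access model concrete, here is a minimal sketch (not the paper's tester, and the 3-state chain below is a hypothetical example) of what "a single trajectory of a symmetric Markov chain" gives the tester to work with: one unrestarted run $X_0,\ldots,X_t$, from which empirical transition counts can be collected.

```python
import numpy as np

def sample_trajectory(P, t, x0=0, rng=None):
    """Sample a single trajectory X_0, ..., X_t from transition matrix P,
    starting at x0, with no restarts (the access model of the paper)."""
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    traj = [x0]
    for _ in range(t):
        traj.append(int(rng.choice(n, p=P[traj[-1]])))
    return traj

def transition_counts(traj, n):
    """Count observed transitions i -> j along the single trajectory."""
    counts = np.zeros((n, n), dtype=int)
    for i, j in zip(traj, traj[1:]):
        counts[i, j] += 1
    return counts

# A hypothetical symmetric 3-state chain (P = P^T, rows sum to 1);
# symmetry of P is the structural assumption the paper's testers exploit.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
traj = sample_trajectory(P, t=10_000, rng=0)
counts = transition_counts(traj, n=3)
```

Note that because the chain cannot be restarted, the tester only sees each state as often as the trajectory happens to visit it, which is why hitting times of the model chain enter the sample-complexity bounds.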

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-daskalakis18a,
  title     = {Testing Symmetric Markov Chains From a Single Trajectory},
  author    = {Daskalakis, Constantinos and Dikkala, Nishanth and Gravin, Nick},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {385--409},
  year      = {2018},
  editor    = {S\'ebastien Bubeck and Vianney Perchet and Philippe Rigollet},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/daskalakis18a/daskalakis18a.pdf},
  url       = {http://proceedings.mlr.press/v75/daskalakis18a.html},
  abstract  = {Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a {\em single trajectory of a Markov Chain}. In particular, we observe a single trajectory $X_0,\ldots,X_t,\ldots$ of an unknown, symmetric, and finite state Markov Chain $\cal M$. We do not control the starting state $X_0$, and we cannot restart the chain. Given our single trajectory, the goal is to test whether $\cal M$ is identical to a model Markov Chain ${\cal M}'$, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain ${\cal M}'$ are $\tilde{O}(n)$ in the size of the state space $n$.}
}
Endnote
%0 Conference Paper %T Testing Symmetric Markov Chains From a Single Trajectory %A Constantinos Daskalakis %A Nishanth Dikkala %A Nick Gravin %B Proceedings of the 31st Conference On Learning Theory %C Proceedings of Machine Learning Research %D 2018 %E Sébastien Bubeck %E Vianney Perchet %E Philippe Rigollet %F pmlr-v75-daskalakis18a %I PMLR %J Proceedings of Machine Learning Research %P 385--409 %U http://proceedings.mlr.press %V 75 %W PMLR %X Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a {\em single trajectory of a Markov Chain}. In particular, we observe a single trajectory $X_0,\ldots,X_t,\ldots$ of an unknown, symmetric, and finite state Markov Chain $\cal M$. We do not control the starting state $X_0$, and we cannot restart the chain. Given our single trajectory, the goal is to test whether $\cal M$ is identical to a model Markov Chain ${\cal M}'$, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain ${\cal M}'$ are $\tilde{O}(n)$ in the size of the state space $n$.
APA
Daskalakis, C., Dikkala, N. & Gravin, N. (2018). Testing Symmetric Markov Chains From a Single Trajectory. Proceedings of the 31st Conference On Learning Theory, in PMLR 75:385-409.