Learning and Testing Irreducible Markov Chains via the $k$-Cover Time

Siu On Chan, Qinghua Ding, Sing Hei Li
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:458-480, 2021.

Abstract

We give a unified way of testing and learning finite Markov chains from a single Markovian trajectory, using the idea of $k$-cover time introduced here. The $k$-cover time is the expected length of a random walk to cover every state at least $k$ times. This generalizes the notion of cover time in the literature. The error metric in the testing and learning problems is the infinity matrix norm between the transition matrices, as considered by Wolfer and Kontorovich. Specifically, we show that if we can learn or test discrete distributions using $k$ samples, then we can learn or test Markov chains using a number of samples equal to the $k$-cover time of the chain, up to constant factors. We then derive asymptotic bounds on the $k$-cover time in terms of the number of states, minimum stationary probability and the cover time of the chain. Our bounds are tight for reversible Markov chains and almost tight (up to logarithmic factors) for irreducible ones. Our results on $k$-cover time yield sample complexity bounds for a wider range of learning and testing tasks (including learning, uniformity testing, identity testing, closeness testing and their tolerant versions) over Markov chains, and can be applied to a broader family of Markov chains (irreducible and reversible ones) than previous results, which only apply to ergodic ones.
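The $k$-cover time defined above can be illustrated with a small Monte Carlo simulation: run a random walk on the chain, count visits to each state, and record how many steps it takes until every state has been visited at least $k$ times. This is only an illustrative sketch of the definition (the function name and the example chain are made up here), not the paper's estimator.

```python
import random

def estimate_k_cover_time(P, k, start=0, trials=200, seed=0):
    """Monte Carlo estimate of the k-cover time of a finite Markov chain.

    P is a row-stochastic transition matrix (list of lists). The k-cover
    time is the expected number of steps for a walk to visit every state
    at least k times; here the initial state counts as one visit.
    """
    rng = random.Random(seed)
    n = len(P)
    states = range(n)
    total_steps = 0
    for _ in range(trials):
        counts = [0] * n
        counts[start] = 1
        uncovered = sum(1 for c in counts if c < k)
        state, steps = start, 0
        while uncovered > 0:
            # Take one step of the walk according to row P[state].
            state = rng.choices(states, weights=P[state])[0]
            counts[state] += 1
            if counts[state] == k:  # this state just became covered
                uncovered -= 1
            steps += 1
        total_steps += steps
    return total_steps / trials

# Example: symmetric random walk on a 3-cycle (irreducible and reversible).
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
print(estimate_k_cover_time(P, k=2))
```

As expected from the definition, the estimate grows with $k$: covering every state twice takes strictly longer than covering it once.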

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-chan21a,
  title = {Learning and Testing Irreducible {M}arkov Chains via the $k$-Cover Time},
  author = {Chan, Siu On and Ding, Qinghua and Li, Sing Hei},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages = {458--480},
  year = {2021},
  editor = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume = {132},
  series = {Proceedings of Machine Learning Research},
  month = {16--19 Mar},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v132/chan21a/chan21a.pdf},
  url = {https://proceedings.mlr.press/v132/chan21a.html},
  abstract = {We give a unified way of testing and learning finite Markov chains from a single Markovian trajectory, using the idea of $k$-cover time introduced here. The $k$-cover time is the expected length of a random walk to cover every state at least $k$ times. This generalizes the notion of cover time in the literature. The error metric in the testing and learning problems is the infinity matrix norm between the transition matrices, as considered by Wolfer and Kontorovich. Specifically, we show that if we can learn or test discrete distributions using $k$ samples, then we can learn or test Markov chains using a number of samples equal to the $k$-cover time of the chain, up to constant factors. We then derive asymptotic bounds on the $k$-cover time in terms of the number of states, minimum stationary probability and the cover time of the chain. Our bounds are tight for reversible Markov chains and almost tight (up to logarithmic factors) for irreducible ones. Our results on $k$-cover time yield sample complexity bounds for a wider range of learning and testing tasks (including learning, uniformity testing, identity testing, closeness testing and their tolerant versions) over Markov chains, and can be applied to a broader family of Markov chains (irreducible and reversible ones) than previous results, which only apply to ergodic ones.}
}
Endnote
%0 Conference Paper
%T Learning and Testing Irreducible Markov Chains via the $k$-Cover Time
%A Siu On Chan
%A Qinghua Ding
%A Sing Hei Li
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-chan21a
%I PMLR
%P 458--480
%U https://proceedings.mlr.press/v132/chan21a.html
%V 132
%X We give a unified way of testing and learning finite Markov chains from a single Markovian trajectory, using the idea of $k$-cover time introduced here. The $k$-cover time is the expected length of a random walk to cover every state at least $k$ times. This generalizes the notion of cover time in the literature. The error metric in the testing and learning problems is the infinity matrix norm between the transition matrices, as considered by Wolfer and Kontorovich. Specifically, we show that if we can learn or test discrete distributions using $k$ samples, then we can learn or test Markov chains using a number of samples equal to the $k$-cover time of the chain, up to constant factors. We then derive asymptotic bounds on the $k$-cover time in terms of the number of states, minimum stationary probability and the cover time of the chain. Our bounds are tight for reversible Markov chains and almost tight (up to logarithmic factors) for irreducible ones. Our results on $k$-cover time yield sample complexity bounds for a wider range of learning and testing tasks (including learning, uniformity testing, identity testing, closeness testing and their tolerant versions) over Markov chains, and can be applied to a broader family of Markov chains (irreducible and reversible ones) than previous results, which only apply to ergodic ones.
APA
Chan, S.O., Ding, Q. & Li, S.H. (2021). Learning and Testing Irreducible Markov Chains via the $k$-Cover Time. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:458-480. Available from https://proceedings.mlr.press/v132/chan21a.html.