SCOT Approximation, Training and Asymptotic Inference
Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, PMLR 60:241-265, 2017.
Abstract
We outline the approximation of stationary strongly mixing processes by Stochastic Context Tree (SCOT) models
and the Le Cam-Hajek-Ibragimov-Khasminsky locally minimax theory of statistical inference for them.
SCOT is an $m$-Markov model with sparse memory structure.
In our previous papers we proved the equivalence of SCOT to a 1-MC whose state space (alphabet) consists of the SCOT contexts.
For a fixed alphabet size and growing sample size, Local Asymptotic Normality is proved and applied to establish asymptotically optimal inference.
We outline the obstacles that arise for a large SCOT alphabet size when the sample size is not necessarily vast.
We describe training SCOT on a large string using clusters of computers, along with statistical applications.
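To make the sparse-memory idea concrete, the following toy Python sketch (not from the paper; all names such as ContextTree and next_symbol_probs are hypothetical) counts next-symbol occurrences for contexts up to depth $m$ and predicts from the longest stored suffix of the history. A real SCOT would additionally prune contexts whose conditional distribution coincides with that of their parent, which this sketch omits.

```python
# Illustrative sketch only: a context-tree counter with bounded memory depth m.
from collections import Counter


class ContextTree:
    def __init__(self, max_depth):
        self.max_depth = max_depth   # m: maximal memory length
        self.counts = {}             # context (tuple of symbols) -> Counter of next symbols

    def train(self, string):
        """Count next-symbol occurrences after every suffix of length 0..max_depth."""
        for i in range(len(string)):
            for d in range(self.max_depth + 1):
                if d > i:
                    break
                context = tuple(string[i - d:i])
                self.counts.setdefault(context, Counter())[string[i]] += 1

    def longest_context(self, history):
        """Return the longest stored suffix of `history` (the active context)."""
        for d in range(min(self.max_depth, len(history)), -1, -1):
            context = tuple(history[len(history) - d:])
            if context in self.counts:
                return context
        return ()

    def next_symbol_probs(self, history):
        """Empirical next-symbol distribution conditioned on the active context."""
        counter = self.counts[self.longest_context(history)]
        total = sum(counter.values())
        return {s: c / total for s, c in counter.items()}


# Usage: train on a binary string and query the conditional distribution.
tree = ContextTree(max_depth=3)
tree.train("0110100110010110")
print(tree.next_symbol_probs("011"))
```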