SCOT Approximation, Training and Asymptotic Inference
Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, PMLR 60:241-265, 2017.
Abstract
Approximation of stationary strongly mixing processes by Stochastic Context Trees (SCOT) models
and the Le Cam-Hajek-Ibragimov-Khasminsky locally minimax theory of statistical inference for them are outlined.
SCOT is an $m$-Markov model with sparse memory structure.
In our previous papers we proved SCOT equivalence to a first-order Markov chain (1-MC) whose state space (alphabet) consists of the SCOT contexts.
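As a toy illustration of this reduction (a sketch with an invented binary alphabet and made-up transition probabilities, not the paper's SCOT construction or training procedure), a variable-memory source can be specified by a suffix-free set of contexts, and one step of the equivalent first-order chain maps the current context and emitted symbol to the next context:

```python
# Hypothetical, illustrative context set over the alphabet {0, 1}:
# the memory needed to predict the next symbol is a *suffix* of the past,
# not a fixed order m. Probabilities are invented for the example.
CONTEXTS = {
    "0":  {"0": 0.7, "1": 0.3},   # after a 0, one symbol of memory suffices
    "01": {"0": 0.4, "1": 0.6},   # after ...01, look back two symbols
    "11": {"0": 0.2, "1": 0.8},
}

def context_of(past: str) -> str:
    """Return the unique context that is a suffix of `past`."""
    for c in CONTEXTS:
        if past.endswith(c):
            return c
    raise ValueError(f"no context matches past {past!r}")

def next_state(context: str, symbol: str) -> str:
    """One step of the equivalent first-order Markov chain on contexts."""
    return context_of(context + symbol)
```

Because the context set is suffix-free and complete, `next_state` is well defined, so the contexts themselves serve as the state space of a 1-MC, which is the equivalence used for the asymptotic analysis.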
For a fixed alphabet size and growing sample size, Local Asymptotic Normality is proved and applied to establish asymptotically optimal inference.
We outline the obstacles that arise for a large SCOT alphabet size and a not necessarily vast sample size.
Training SCOT on a large string using clusters of computers is described, along with statistical applications.