SCOT Approximation, Training and Asymptotic Inference

Mikhail Malyutov, Paul Grosu
Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, PMLR 60:241-265, 2017.

Abstract

We outline the approximation of stationary strongly mixing processes by Stochastic Context Tree (SCOT) models and the Le Cam-Hajek-Ibragimov-Khasminsky locally minimax theory of statistical inference for them. SCOT is an $m$-Markov model with a sparse memory structure. In our previous papers we proved the equivalence of SCOT to a 1-MC whose state space (alphabet) consists of the SCOT contexts. For a fixed alphabet size and growing sample size, Local Asymptotic Normality is proved and applied to establish asymptotically optimal inference. We outline the obstacles that arise when the SCOT alphabet is large and the sample size is not necessarily vast. Training SCOT on a large string using clusters of computers is described, along with statistical applications.
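
The sparse memory structure described above can be illustrated with a small, self-contained sketch. The Python function below is a simplified, hypothetical construction, not the authors' distributed training procedure: it counts next-symbol frequencies for candidate contexts up to depth m and keeps only those contexts whose conditional next-symbol distribution differs from that of their shorter parent context by more than a KL-divergence threshold. The retained contexts play the role of the 1-MC state space mentioned in the abstract. The function names, the pruning rule, and all parameter values are illustrative assumptions.

    # Minimal sketch of sparse-context estimation (illustrative only;
    # not the SCOT training algorithm from the paper).
    from collections import Counter, defaultdict
    from math import log

    def kl(child, parent):
        """KL divergence between two next-symbol count distributions."""
        n_c, n_p = sum(child.values()), sum(parent.values())
        return sum((k / n_c) * log((k / n_c) / (parent[s] / n_p))
                   for s, k in child.items())

    def scot_contexts(text, m=3, kl_threshold=0.05):
        """Return contexts (suffixes of depth <= m) kept after pruning,
        mapped to their next-symbol counts."""
        counts = defaultdict(Counter)
        for i in range(len(text)):
            for d in range(min(m, i) + 1):
                counts[text[i - d:i]][text[i]] += 1
        kept = {"": counts[""]}  # the empty (root) context is always kept
        for c, dist in list(counts.items()):
            # keep a context only if the extra memory symbol is informative
            if c and kl(dist, counts[c[1:]]) > kl_threshold:
                kept[c] = dist
        return kept

    # Example usage on a toy string:
    tree = scot_contexts("abracadabra" * 50, m=3)
    for context, dist in sorted(tree.items()):
        print(repr(context), dict(dist))
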

Cite this Paper


BibTeX
@InProceedings{pmlr-v60-malyutov17a,
  title     = {{SCOT} Approximation, Training and Asymptotic Inference},
  author    = {Malyutov, Mikhail and Grosu, Paul},
  booktitle = {Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications},
  pages     = {241--265},
  year      = {2017},
  editor    = {Gammerman, Alex and Vovk, Vladimir and Luo, Zhiyuan and Papadopoulos, Harris},
  volume    = {60},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--16 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v60/malyutov17a/malyutov17a.pdf},
  url       = {https://proceedings.mlr.press/v60/malyutov17a.html},
  abstract  = {Approximation of stationary strongly mixing processes by Stochastic Context Trees (SCOT) models and the Le Cam-Hajek-Ibragimov-Khasminsky locally minimax theory of statistical inference for them is outlined. SCOT is an $m$-Markov model with sparse memory structure. In our previous papers we proved SCOT equivalence to 1-MC with state space—alphabet consisting of the SCOT contexts. For the fixed alphabet size and growing sample size, the Local Asymptotic Normality is proved and applied for establishing asymptotically optimal inference. We outline what obstacles arise for a large SCOT alphabet size and not necessarily vast sample size. Training SCOT on a large string using clusters of computers and statistical applications are described.}
}
Endnote
%0 Conference Paper
%T SCOT Approximation, Training and Asymptotic Inference
%A Mikhail Malyutov
%A Paul Grosu
%B Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications
%C Proceedings of Machine Learning Research
%D 2017
%E Alex Gammerman
%E Vladimir Vovk
%E Zhiyuan Luo
%E Harris Papadopoulos
%F pmlr-v60-malyutov17a
%I PMLR
%P 241--265
%U https://proceedings.mlr.press/v60/malyutov17a.html
%V 60
%X Approximation of stationary strongly mixing processes by Stochastic Context Trees (SCOT) models and the Le Cam-Hajek-Ibragimov-Khasminsky locally minimax theory of statistical inference for them is outlined. SCOT is an $m$-Markov model with sparse memory structure. In our previous papers we proved SCOT equivalence to 1-MC with state space—alphabet consisting of the SCOT contexts. For the fixed alphabet size and growing sample size, the Local Asymptotic Normality is proved and applied for establishing asymptotically optimal inference. We outline what obstacles arise for a large SCOT alphabet size and not necessarily vast sample size. Training SCOT on a large string using clusters of computers and statistical applications are described.
APA
Malyutov, M. & Grosu, P. (2017). SCOT Approximation, Training and Asymptotic Inference. Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, in Proceedings of Machine Learning Research 60:241-265. Available from https://proceedings.mlr.press/v60/malyutov17a.html.