Skip Context Tree Switching

Marc Bellemare, Joel Veness, Erik Talvitie
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1458-1466, 2014.

Abstract

Context Tree Weighting (CTW) is a powerful probabilistic sequence prediction technique that efficiently performs Bayesian model averaging over the class of all prediction suffix trees of bounded depth. In this paper we show how to generalize this technique to the class of K-skip prediction suffix trees. Contrary to regular prediction suffix trees, K-skip prediction suffix trees are permitted to ignore up to K contiguous portions of the context. This allows for significant improvements in predictive accuracy when irrelevant variables are present, a case which often occurs within record-aligned data and images. We provide a regret-based analysis of our approach, and empirically evaluate it on the Calgary corpus and a set of Atari 2600 screen prediction tasks.
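As a point of reference for the abstract, below is a minimal sketch of the plain binary CTW recursion that Skip-CTS generalizes. This is not code from the paper: the names (CTWNode, update) are illustrative assumptions, it uses raw probabilities rather than the log-space arithmetic a practical implementation would need, and it implements the standard weighted mixture P_w = 1/2 P_KT + 1/2 P_w(child 0) P_w(child 1) rather than the switching and skipping machinery the paper adds.

class CTWNode:
    """Node of a depth-bounded binary context tree."""
    def __init__(self):
        self.counts = [0, 0]          # zeros / ones observed in this context
        self.p_kt = 1.0               # Krichevsky-Trofimov block probability
        self.p_w = 1.0                # weighted (Bayesian mixture) block probability
        self.children = [None, None]  # indexed by the next context symbol

def update(node, context, bit, depth):
    """Observe `bit` in `context` (most recent symbol first) and return
    the updated weighted block probability of this subtree."""
    a, b = node.counts
    # KT estimator: P(next = bit) = (count[bit] + 1/2) / (total + 1)
    node.p_kt *= (node.counts[bit] + 0.5) / (a + b + 1.0)
    node.counts[bit] += 1
    if depth == 0:
        node.p_w = node.p_kt          # leaves use the KT estimator alone
    else:
        c = context[0]
        if node.children[c] is None:
            node.children[c] = CTWNode()
        child_pw = update(node.children[c], context[1:], bit, depth - 1)
        sibling = node.children[1 - c]
        sibling_pw = sibling.p_w if sibling else 1.0
        # Mix the local KT model with the split into the two children.
        node.p_w = 0.5 * node.p_kt + 0.5 * child_pw * sibling_pw
    return node.p_w

# Sequential prediction: the ratio of successive block probabilities
# gives the conditional probability of each observed bit.
root, depth = CTWNode(), 3
history = [0] * depth                  # arbitrary padding for the first bits
p_prev = 1.0
for bit in [1, 0, 1, 1, 0, 1, 1, 0]:
    p_new = update(root, history, bit, depth)
    print(f"P(bit={bit} | context={history}) = {p_new / p_prev:.4f}")
    p_prev = p_new
    history = [bit] + history[:-1]     # shift in the newest symbol

Skip-CTS departs from this sketch in two ways: each internal node may also skip up to K contiguous portions of the context instead of always splitting on the next symbol, and models are combined with the switching rule of Context Tree Switching rather than the fixed 1/2-1/2 weighting above.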

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-bellemare14,
  title     = {Skip Context Tree Switching},
  author    = {Bellemare, Marc and Veness, Joel and Talvitie, Erik},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1458--1466},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/bellemare14.pdf},
  url       = {https://proceedings.mlr.press/v32/bellemare14.html},
  abstract  = {Context Tree Weighting (CTW) is a powerful probabilistic sequence prediction technique that efficiently performs Bayesian model averaging over the class of all prediction suffix trees of bounded depth. In this paper we show how to generalize this technique to the class of K-skip prediction suffix trees. Contrary to regular prediction suffix trees, K-skip prediction suffix trees are permitted to ignore up to K contiguous portions of the context. This allows for significant improvements in predictive accuracy when irrelevant variables are present, a case which often occurs within record-aligned data and images. We provide a regret-based analysis of our approach, and empirically evaluate it on the Calgary corpus and a set of Atari 2600 screen prediction tasks.}
}
EndNote
%0 Conference Paper
%T Skip Context Tree Switching
%A Marc Bellemare
%A Joel Veness
%A Erik Talvitie
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-bellemare14
%I PMLR
%P 1458--1466
%U https://proceedings.mlr.press/v32/bellemare14.html
%V 32
%N 2
%X Context Tree Weighting (CTW) is a powerful probabilistic sequence prediction technique that efficiently performs Bayesian model averaging over the class of all prediction suffix trees of bounded depth. In this paper we show how to generalize this technique to the class of K-skip prediction suffix trees. Contrary to regular prediction suffix trees, K-skip prediction suffix trees are permitted to ignore up to K contiguous portions of the context. This allows for significant improvements in predictive accuracy when irrelevant variables are present, a case which often occurs within record-aligned data and images. We provide a regret-based analysis of our approach, and empirically evaluate it on the Calgary corpus and a set of Atari 2600 screen prediction tasks.
RIS
TY  - CPAPER
TI  - Skip Context Tree Switching
AU  - Marc Bellemare
AU  - Joel Veness
AU  - Erik Talvitie
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-bellemare14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 1458
EP  - 1466
L1  - http://proceedings.mlr.press/v32/bellemare14.pdf
UR  - https://proceedings.mlr.press/v32/bellemare14.html
AB  - Context Tree Weighting (CTW) is a powerful probabilistic sequence prediction technique that efficiently performs Bayesian model averaging over the class of all prediction suffix trees of bounded depth. In this paper we show how to generalize this technique to the class of K-skip prediction suffix trees. Contrary to regular prediction suffix trees, K-skip prediction suffix trees are permitted to ignore up to K contiguous portions of the context. This allows for significant improvements in predictive accuracy when irrelevant variables are present, a case which often occurs within record-aligned data and images. We provide a regret-based analysis of our approach, and empirically evaluate it on the Calgary corpus and a set of Atari 2600 screen prediction tasks.
ER  -
APA
Bellemare, M., Veness, J. & Talvitie, E. (2014). Skip Context Tree Switching. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1458-1466. Available from https://proceedings.mlr.press/v32/bellemare14.html.
