Skip Context Tree Switching

Marc Bellemare, Joel Veness, Erik Talvitie
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1458-1466, 2014.

Abstract

Context Tree Weighting (CTW) is a powerful probabilistic sequence prediction technique that efficiently performs Bayesian model averaging over the class of all prediction suffix trees of bounded depth. In this paper we show how to generalize this technique to the class of K-skip prediction suffix trees. Unlike regular prediction suffix trees, K-skip prediction suffix trees are permitted to ignore up to K contiguous portions of the context. This allows for significant improvements in predictive accuracy when irrelevant variables are present, a case which often occurs within record-aligned data and images. We provide a regret-based analysis of our approach, and empirically evaluate it on the Calgary corpus and a set of Atari 2600 screen prediction tasks.
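To make the baseline concrete, the following is a minimal sketch of binary CTW, the mixture the paper generalizes; it is not the Skip CTS algorithm itself. The class and method names (CTWNode, CTW, update) are illustrative assumptions, and the sketch assumes a binary alphabet with a zero-padded initial context. Each node mixes its local Krichevsky-Trofimov (KT) estimate with the product of its children's mixtures, P_w = 0.5 P_e + 0.5 P_w(child 0) P_w(child 1), computed in log space for numerical stability.

    import math

    def log_sum_exp(x, y):
        # Numerically stable log(exp(x) + exp(y)).
        m = max(x, y)
        return m + math.log(math.exp(x - m) + math.exp(y - m))

    class CTWNode:
        # One context-tree node: KT counts plus log-probabilities.
        def __init__(self):
            self.counts = [0, 0]          # 0s and 1s observed in this context
            self.log_pe = 0.0             # log KT estimator probability
            self.log_pw = 0.0             # log weighted (mixture) probability
            self.children = [None, None]

    class CTW:
        # Depth-bounded binary Context Tree Weighting.
        def __init__(self, depth):
            self.depth = depth
            self.root = CTWNode()

        def update(self, bit, history):
            # Walk from the root to the leaf selected by the most recent
            # `depth` bits of history, creating nodes lazily.
            path, node = [self.root], self.root
            for c in reversed(history[-self.depth:]):
                if node.children[c] is None:
                    node.children[c] = CTWNode()
                node = node.children[c]
                path.append(node)
            # Update bottom-up: KT estimate first, then the node mixture
            # P_w = 0.5 * P_e + 0.5 * P_w(child 0) * P_w(child 1).
            for node in reversed(path):
                a, b = node.counts
                node.log_pe += math.log((node.counts[bit] + 0.5) / (a + b + 1.0))
                node.counts[bit] += 1
                if node is path[-1]:      # leaf: P_w = P_e
                    node.log_pw = node.log_pe
                else:
                    kids = sum(c.log_pw for c in node.children if c is not None)
                    node.log_pw = math.log(0.5) + log_sum_exp(node.log_pe, kids)

A small usage example, again as a sketch:

    model = CTW(depth=4)
    history = [0] * 4                    # zero-padded initial context (an assumption)
    for bit in [0, 1, 1, 0, 1, 1, 0, 1, 1, 0]:
        model.update(bit, history)
        history.append(bit)
    print(-model.root.log_pw / math.log(2))   # code length assigned to the sequence, in bits

Roughly speaking, Skip CTS builds on Context Tree Switching, which replaces the fixed 1/2-1/2 weighting above with a switching mixture, and additionally lets each node skip up to K contiguous portions of the context before splitting; that per-node skip option is what yields the robustness to irrelevant variables described in the abstract.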

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-bellemare14,
  title     = {Skip Context Tree Switching},
  author    = {Marc Bellemare and Joel Veness and Erik Talvitie},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1458--1466},
  year      = {2014},
  editor    = {Eric P. Xing and Tony Jebara},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/bellemare14.pdf},
  url       = {http://proceedings.mlr.press/v32/bellemare14.html}
}
APA
Bellemare, M., Veness, J. & Talvitie, E. (2014). Skip Context Tree Switching. Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1458-1466.