Online Learning with Predictable Sequences

Alexander Rakhlin, Karthik Sridharan
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:993-1019, 2013.

Abstract

We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is described well by a known “predictable process”, the algorithms presented enjoy tighter bounds as compared to the typical worst-case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information. Variance and path-length bounds can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods to include competing with a set of possible predictable processes (models), that is, “learning” the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback.
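The optimistic-update idea the abstract describes can be sketched in the simplest expert setting: before each round, weight the experts by their cumulative loss *plus* a hint predicting the next loss vector, then update with the true loss as usual. This is a minimal illustrative sketch, not the paper's algorithm; the function name, the fixed step size `eta`, and the hint interface are assumptions for illustration.

```python
import math

def optimistic_hedge(loss_rounds, hints, eta=0.5):
    """Hedge over d experts with per-round hints (a sketch of the
    optimistic-update idea; names and the fixed step size eta are
    illustrative assumptions, not taken from the paper)."""
    d = len(loss_rounds[0])
    cum = [0.0] * d                  # cumulative observed losses
    total = 0.0
    for loss, hint in zip(loss_rounds, hints):
        # Play weights that already account for the predicted loss.
        logits = [-eta * (cum[i] + hint[i]) for i in range(d)]
        m = max(logits)              # shift for numerical stability
        w = [math.exp(x - m) for x in logits]
        s = sum(w)
        w = [x / s for x in w]
        total += sum(wi * li for wi, li in zip(w, loss))
        for i in range(d):           # standard Hedge update
            cum[i] += loss[i]
    return total
```

When the hints match the true losses exactly (a perfectly "benign" sequence), the learner incurs strictly less loss than with uninformative zero hints, while with zero hints the procedure reduces to ordinary Hedge and keeps its usual worst-case guarantee.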

Cite this Paper


BibTeX
@InProceedings{pmlr-v30-Rakhlin13,
  title     = {Online Learning with Predictable Sequences},
  author    = {Rakhlin, Alexander and Sridharan, Karthik},
  booktitle = {Proceedings of the 26th Annual Conference on Learning Theory},
  pages     = {993--1019},
  year      = {2013},
  editor    = {Shalev-Shwartz, Shai and Steinwart, Ingo},
  volume    = {30},
  series    = {Proceedings of Machine Learning Research},
  address   = {Princeton, NJ, USA},
  month     = {12--14 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v30/Rakhlin13.pdf},
  url       = {https://proceedings.mlr.press/v30/Rakhlin13.html},
  abstract  = {We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is described well by a known “predictable process”, the algorithms presented enjoy tighter bounds as compared to the typical worst-case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding \emph{prior} knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information. Variance and path-length bounds can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods to include competing with a set of possible predictable processes (models), that is, “learning” the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback.}
}
Endnote
%0 Conference Paper
%T Online Learning with Predictable Sequences
%A Alexander Rakhlin
%A Karthik Sridharan
%B Proceedings of the 26th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2013
%E Shai Shalev-Shwartz
%E Ingo Steinwart
%F pmlr-v30-Rakhlin13
%I PMLR
%P 993--1019
%U https://proceedings.mlr.press/v30/Rakhlin13.html
%V 30
%X We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is described well by a known “predictable process”, the algorithms presented enjoy tighter bounds as compared to the typical worst-case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information. Variance and path-length bounds can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods to include competing with a set of possible predictable processes (models), that is, “learning” the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback.
RIS
TY  - CPAPER
TI  - Online Learning with Predictable Sequences
AU  - Alexander Rakhlin
AU  - Karthik Sridharan
BT  - Proceedings of the 26th Annual Conference on Learning Theory
DA  - 2013/06/13
ED  - Shai Shalev-Shwartz
ED  - Ingo Steinwart
ID  - pmlr-v30-Rakhlin13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 30
SP  - 993
EP  - 1019
L1  - http://proceedings.mlr.press/v30/Rakhlin13.pdf
UR  - https://proceedings.mlr.press/v30/Rakhlin13.html
AB  - We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically, if the sequence encountered by the learner is described well by a known “predictable process”, the algorithms presented enjoy tighter bounds as compared to the typical worst-case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information. Variance and path-length bounds can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods to include competing with a set of possible predictable processes (models), that is, “learning” the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback.
ER  -
APA
Rakhlin, A. & Sridharan, K. (2013). Online Learning with Predictable Sequences. Proceedings of the 26th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 30:993-1019. Available from https://proceedings.mlr.press/v30/Rakhlin13.html.
