Second-order Quantile Methods for Experts and Combinatorial Games

Wouter M. Koolen, Tim Van Erven
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1155-1175, 2015.

Abstract

We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem. We study this question both in the setting of prediction with expert advice, and for more general combinatorial decision tasks. We are not satisfied with just guaranteeing minimax regret rates, but we want our algorithms to perform significantly better on easy data. Two popular ways to formalize such adaptivity are second-order regret bounds and quantile bounds. The underlying notions of ‘easy data’, which may be paraphrased as “the learning problem has small variance” and “multiple decisions are useful”, are synergistic. But even though there are sophisticated algorithms that exploit one of the two, no existing algorithm is able to adapt to both. The difficulty in combining the two notions lies in tuning a parameter called the learning rate, whose optimal value behaves non-monotonically. We introduce a potential function for which (very surprisingly!) it is sufficient to simply put a prior on learning rates, an approach that does not work for any previous method. By choosing the right prior we construct efficient algorithms and show that they reap both benefits by proving the first bounds that are both second-order and incorporate quantiles.
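To make the prior-on-learning-rates idea concrete, below is a minimal sketch (in Python/NumPy) of a Squint-style update for the expert setting: each expert k tracks its cumulative instantaneous regret R_k and the sum of squared instantaneous regrets V_k (the second-order term), and its weight is proportional to pi(k) times an average of eta * exp(eta * R_k - eta^2 * V_k) over learning rates eta. The discretized grid of learning rates, the uniform prior gamma over that grid, and the name squint_sketch are illustrative assumptions for this sketch, not the paper's exact continuous formulation.

    import numpy as np

    def squint_sketch(losses, prior=None, n_rates=50):
        """Sketch of a Squint-style expert aggregator that averages over
        a discretized prior of learning rates eta in (0, 1/2].

        losses: (T, K) array of expert losses in [0, 1] per round.
        prior:  optional length-K prior over experts (uniform if None).
        Returns the (T, K) array of weight vectors played.
        """
        T, K = losses.shape
        pi = np.full(K, 1.0 / K) if prior is None else np.asarray(prior, dtype=float)
        etas = np.linspace(1e-3, 0.5, n_rates)   # grid standing in for a continuous prior
        gamma = np.full(n_rates, 1.0 / n_rates)  # prior mass on each learning rate

        R = np.zeros(K)  # cumulative instantaneous regret per expert
        V = np.zeros(K)  # cumulative squared instantaneous regret (variance term)
        played = []
        for t in range(T):
            # Weight of expert k: pi(k) * E_gamma[ eta * exp(eta*R_k - eta^2*V_k) ]
            expo = np.outer(R, etas) - np.outer(V, etas ** 2)  # shape (K, n_rates)
            expo -= expo.max()  # rescale for numerical stability; cancels on normalization
            evidence = (etas * np.exp(expo)) @ gamma           # shape (K,)
            w = pi * evidence
            w = w / w.sum()
            played.append(w)

            # Instantaneous regret of each expert vs. the mixture's loss
            mix_loss = float(w @ losses[t])
            r = mix_loss - losses[t]
            R += r
            V += r ** 2
        return np.array(played)

    # Usage on synthetic data: w_seq = squint_sketch(np.random.rand(100, 5))

Averaging over the grid is what removes the need to tune a single learning rate: rates that fit the observed regret/variance trade-off automatically dominate the expectation.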

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Koolen15a,
  title     = {Second-order Quantile Methods for Experts and Combinatorial Games},
  author    = {Koolen, Wouter M. and Van Erven, Tim},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {1155--1175},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Koolen15a.pdf},
  url       = {https://proceedings.mlr.press/v40/Koolen15a.html}
}
APA
Koolen, W. M., & Van Erven, T. (2015). Second-order Quantile Methods for Experts and Combinatorial Games. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:1155-1175. Available from https://proceedings.mlr.press/v40/Koolen15a.html.