A second-order bound with excess losses

Pierre Gaillard, Gilles Stoltz, Tim van Erven
Proceedings of The 27th Conference on Learning Theory, PMLR 35:176-196, 2014.

Abstract

We study the online aggregation of expert predictions and first establish new second-order regret bounds in the standard setting, obtained via a version of the Prod algorithm (and also a version of the polynomially weighted average algorithm) with multiple learning rates. These bounds are stated in terms of excess losses, the differences between the instantaneous losses suffered by the algorithm and those of a given expert. We then demonstrate the usefulness of these bounds in the setting of experts that report their confidences as a number in the interval [0,1], via a generic reduction to the standard setting. We conclude with two other applications in the standard setting, which improve the known bounds in the case of small excess losses and show a bounded regret against i.i.d. sequences of losses.
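
To make the flavor of the approach concrete, below is a minimal sketch, in Python, of a Prod-style aggregation rule with one learning rate per expert, in the spirit of the paper's algorithm. The fixed learning rates, the synthetic losses, and the function name are illustrative assumptions; the paper's actual algorithm tunes the learning rates adaptively and comes with the second-order guarantee described in the abstract, in which the regret against expert k is driven by the sum of the squared excess losses rather than by the time horizon alone.

# Minimal sketch (not the paper's exact algorithm): Prod-style aggregation with one
# learning rate per expert. Losses are assumed to lie in [0, 1]; the fixed learning
# rates and the synthetic data are simplifying assumptions for illustration only.
import numpy as np

def prod_multiple_learning_rates(expert_losses, etas):
    """expert_losses: (T, K) array of losses; etas: (K,) per-expert learning rates."""
    T, K = expert_losses.shape
    w = np.ones(K)                                  # Prod weights, one per expert
    algo_losses = np.zeros(T)
    for t in range(T):
        p = etas * w
        p /= p.sum()                                # prediction weights proportional to eta_k * w_k
        algo_losses[t] = p @ expert_losses[t]       # loss of the aggregated prediction
        excess = algo_losses[t] - expert_losses[t]  # instantaneous excess losses r_{k,t}
        w *= 1.0 + etas * excess                    # multiplicative Prod update, one rate per expert
    return algo_losses

# Usage on synthetic data: the per-expert regrets can then be compared with the
# second-order quantities sum_t r_{k,t}^2 appearing in the bounds.
rng = np.random.default_rng(0)
losses = rng.random((1000, 5))                      # synthetic [0,1] losses for 5 experts
etas = np.full(5, 0.1)
algo = prod_multiple_learning_rates(losses, etas)
regret = algo.sum() - losses.sum(axis=0)
print(regret)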

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-gaillard14,
  title     = {A second-order bound with excess losses},
  author    = {Gaillard, Pierre and Stoltz, Gilles and van Erven, Tim},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {176--196},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/gaillard14.pdf},
  url       = {https://proceedings.mlr.press/v35/gaillard14.html}
}
APA
Gaillard, P., Stoltz, G., & van Erven, T. (2014). A second-order bound with excess losses. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:176-196. Available from https://proceedings.mlr.press/v35/gaillard14.html.