Beyond Logarithmic Bounds in Online Learning

Francesco Orabona, Nicolo Cesa-Bianchi, Claudio Gentile
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:823-831, 2012.

Abstract

We prove logarithmic regret bounds that depend on the loss L_T^* of the competitor rather than on the number T of time steps. In the general online convex optimization setting, our bounds hold for any smooth and exp-concave loss (such as the square loss or the logistic loss). This bridges the gap between the O(ln T) regret exhibited by exp-concave losses and the O(sqrt(L_T^*)) regret exhibited by smooth losses. We also show that these bounds are tight for specific losses, thus they cannot be improved in general. For online regression with square loss, our analysis can be used to derive a sparse randomized variant of the online Newton step, whose expected number of updates scales with the algorithm’s loss. For online classification, we prove the first logarithmic mistake bounds that do not rely on prior knowledge of a bound on the competitor’s norm.
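For orientation, here is a minimal sketch, in standard notation, of the quantities the abstract refers to; the symbols w_t, \ell_t, u, H and \alpha are introduced only for illustration, and the exact definitions, constants, and conditions are those given in the paper.

% Regret of the predictions w_1, ..., w_T against a fixed competitor u,
% and the competitor's cumulative loss L_T^*:
R_T(u) = \sum_{t=1}^{T} \ell_t(w_t) - \sum_{t=1}^{T} \ell_t(u),
\qquad
L_T^* = \sum_{t=1}^{T} \ell_t(u).

% A loss \ell is H-smooth when its gradient is H-Lipschitz,
% and \alpha-exp-concave when e^{-\alpha \ell} is concave:
\|\nabla \ell(x) - \nabla \ell(y)\| \le H\,\|x - y\|,
\qquad
x \mapsto e^{-\alpha \ell(x)} \ \text{is concave}.

Under smoothness and exp-concavity, the bounds advertised above scale logarithmically with the competitor's loss L_T^* rather than with the horizon T, which is how they interpolate between the O(ln T) regime of exp-concave losses and the O(sqrt(L_T^*)) regime of smooth losses.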

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-orabona12,
  title     = {Beyond Logarithmic Bounds in Online Learning},
  author    = {Orabona, Francesco and Cesa-Bianchi, Nicolo and Gentile, Claudio},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {823--831},
  year      = {2012},
  editor    = {Lawrence, Neil D. and Girolami, Mark},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/orabona12/orabona12.pdf},
  url       = {https://proceedings.mlr.press/v22/orabona12.html},
  abstract  = {We prove logarithmic regret bounds that depend on the loss L_T^* of the competitor rather than on the number T of time steps. In the general online convex optimization setting, our bounds hold for any smooth and exp-concave loss (such as the square loss or the logistic loss). This bridges the gap between the O(ln T) regret exhibited by exp-concave losses and the O(sqrt(L_T^*)) regret exhibited by smooth losses. We also show that these bounds are tight for specific losses, thus they cannot be improved in general. For online regression with square loss, our analysis can be used to derive a sparse randomized variant of the online Newton step, whose expected number of updates scales with the algorithm’s loss. For online classification, we prove the first logarithmic mistake bounds that do not rely on prior knowledge of a bound on the competitor’s norm.}
}
APA
Orabona, F., Cesa-Bianchi, N. & Gentile, C. (2012). Beyond Logarithmic Bounds in Online Learning. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 22:823-831. Available from https://proceedings.mlr.press/v22/orabona12.html.
