Online Isotonic Regression

Wojciech Kotłowski, Wouter M. Koolen, Alan Malek
29th Annual Conference on Learning Theory, PMLR 49:1165-1189, 2016.

Abstract

We consider the online version of the isotonic regression problem. Given a set of linearly ordered points (e.g., on the real line), the learner must predict labels sequentially at adversarially chosen positions and is evaluated by her total squared loss compared against the best isotonic (non-decreasing) function in hindsight. We survey several standard online learning algorithms and show that none of them achieve the optimal regret exponent; in fact, most of them (including Online Gradient Descent, Follow the Leader and Exponential Weights) incur linear regret. We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has a regret bounded by O(T^{1/3} log^{2/3}(T)) and present a matching Ω(T^{1/3}) lower bound on regret. We provide a computationally efficient version of this algorithm. We also analyze the noise-free case, in which the revealed labels are isotonic, and show that the bound can be improved to O(log T) or even to O(1) (when the labels are revealed in isotonic order). Finally, we extend the analysis beyond squared loss and give bounds for entropic loss and absolute loss.
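The offline benchmark used throughout the abstract — the best isotonic (non-decreasing) function in hindsight under squared loss — can be computed with the classical pool-adjacent-violators algorithm (PAVA). A minimal sketch for illustration (not code from the paper; function name and interface are our own):

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: least-squares fit of a
    non-decreasing sequence to the labels y."""
    # Each block stores the (mean, weight) of a pooled run of labels.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        # Merge adjacent blocks while monotonicity is violated;
        # the merged block takes the weighted mean of its parts.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w])
    # Expand the blocks back into a full-length fitted sequence.
    fit = []
    for m, w in blocks:
        fit.extend([m] * w)
    return fit
```

For example, `isotonic_fit([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean, yielding `[1.0, 2.5, 2.5, 4.0]`. This is the hindsight comparator; the paper's contribution is the online game played against it.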

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-kotlowski16,
  title     = {Online Isotonic Regression},
  author    = {Kotłowski, Wojciech and Koolen, Wouter M. and Malek, Alan},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {1165--1189},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/kotlowski16.pdf},
  url       = {https://proceedings.mlr.press/v49/kotlowski16.html},
  abstract  = {We consider the online version of the isotonic regression problem. Given a set of linearly ordered points (e.g., on the real line), the learner must predict labels sequentially at adversarially chosen positions and is evaluated by her total squared loss compared against the best isotonic (non-decreasing) function in hindsight. We survey several standard online learning algorithms and show that none of them achieve the optimal regret exponent; in fact, most of them (including Online Gradient Descent, Follow the Leader and Exponential Weights) incur linear regret. We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has a regret bounded by $O\big(T^{1/3} \log^{2/3}(T)\big)$ and present a matching $\Omega(T^{1/3})$ lower bound on regret. We provide a computationally efficient version of this algorithm. We also analyze the noise-free case, in which the revealed labels are isotonic, and show that the bound can be improved to $O(\log T)$ or even to $O(1)$ (when the labels are revealed in isotonic order). Finally, we extend the analysis beyond squared loss and give bounds for entropic loss and absolute loss.}
}
Endnote
%0 Conference Paper
%T Online Isotonic Regression
%A Wojciech Kotłowski
%A Wouter M. Koolen
%A Alan Malek
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-kotlowski16
%I PMLR
%P 1165--1189
%U https://proceedings.mlr.press/v49/kotlowski16.html
%V 49
%X We consider the online version of the isotonic regression problem. Given a set of linearly ordered points (e.g., on the real line), the learner must predict labels sequentially at adversarially chosen positions and is evaluated by her total squared loss compared against the best isotonic (non-decreasing) function in hindsight. We survey several standard online learning algorithms and show that none of them achieve the optimal regret exponent; in fact, most of them (including Online Gradient Descent, Follow the Leader and Exponential Weights) incur linear regret. We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has a regret bounded by O(T^{1/3} log^{2/3}(T)) and present a matching Ω(T^{1/3}) lower bound on regret. We provide a computationally efficient version of this algorithm. We also analyze the noise-free case, in which the revealed labels are isotonic, and show that the bound can be improved to O(log T) or even to O(1) (when the labels are revealed in isotonic order). Finally, we extend the analysis beyond squared loss and give bounds for entropic loss and absolute loss.
RIS
TY  - CPAPER
TI  - Online Isotonic Regression
AU  - Wojciech Kotłowski
AU  - Wouter M. Koolen
AU  - Alan Malek
BT  - 29th Annual Conference on Learning Theory
DA  - 2016/06/06
ED  - Vitaly Feldman
ED  - Alexander Rakhlin
ED  - Ohad Shamir
ID  - pmlr-v49-kotlowski16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 49
SP  - 1165
EP  - 1189
L1  - http://proceedings.mlr.press/v49/kotlowski16.pdf
UR  - https://proceedings.mlr.press/v49/kotlowski16.html
AB  - We consider the online version of the isotonic regression problem. Given a set of linearly ordered points (e.g., on the real line), the learner must predict labels sequentially at adversarially chosen positions and is evaluated by her total squared loss compared against the best isotonic (non-decreasing) function in hindsight. We survey several standard online learning algorithms and show that none of them achieve the optimal regret exponent; in fact, most of them (including Online Gradient Descent, Follow the Leader and Exponential Weights) incur linear regret. We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has a regret bounded by O(T^{1/3} log^{2/3}(T)) and present a matching Ω(T^{1/3}) lower bound on regret. We provide a computationally efficient version of this algorithm. We also analyze the noise-free case, in which the revealed labels are isotonic, and show that the bound can be improved to O(log T) or even to O(1) (when the labels are revealed in isotonic order). Finally, we extend the analysis beyond squared loss and give bounds for entropic loss and absolute loss.
ER  -
APA
Kotłowski, W., Koolen, W.M. & Malek, A. (2016). Online Isotonic Regression. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:1165-1189. Available from https://proceedings.mlr.press/v49/kotlowski16.html.
