Optimal Dynamic Regret in Exp-Concave Online Learning

Dheeraj Baby, Yu-Xiang Wang
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:359-409, 2021.

Abstract

We consider the problem of Zinkevich (2003)-style dynamic regret minimization in online learning with \emph{exp-concave} losses. We show that whenever improper learning is allowed, a Strongly Adaptive online learner achieves a dynamic regret of $\tilde O^*(n^{1/3}C_n^{2/3} \vee 1)$, where $C_n$ is the \emph{total variation} (a.k.a. \emph{path length}) of an arbitrary sequence of comparators that may not be known to the learner ahead of time. Achieving this rate was highly nontrivial even for square losses in 1D, where the best known upper bound was $O(\sqrt{nC_n} \vee \log n)$ (Yuan and Lamperski, 2019). Our new proof techniques make elegant use of the intricate structures of the primal and dual variables imposed by the KKT conditions and could be of independent interest. Finally, we apply our results to the classical statistical problem of \emph{locally adaptive non-parametric regression} (Mammen, 1991; Donoho and Johnstone, 1998) and obtain a stronger and more flexible algorithm that does not require any statistical assumptions or hyperparameter tuning.
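For reference, a minimal sketch of the quantities appearing in the abstract, following the standard Zinkevich (2003) setup (the particular norm used for the path length is our assumption; the paper's precise choice may differ): at each round the learner plays $x_t$ and then observes the exp-concave loss $f_t$, and its dynamic regret against an arbitrary comparator sequence $u_1, \dots, u_n$ is

\[
R_n(u_1, \dots, u_n) \;=\; \sum_{t=1}^{n} f_t(x_t) \;-\; \sum_{t=1}^{n} f_t(u_t),
\qquad
C_n \;=\; \sum_{t=2}^{n} \| u_t - u_{t-1} \|,
\]

where $a \vee b$ denotes $\max(a, b)$, so the stated bound $\tilde O^*(n^{1/3}C_n^{2/3} \vee 1)$ never falls below a constant even when the comparator sequence is static ($C_n = 0$).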

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-baby21a,
  title     = {Optimal Dynamic Regret in Exp-Concave Online Learning},
  author    = {Baby, Dheeraj and Wang, Yu-Xiang},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {359--409},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/baby21a/baby21a.pdf},
  url       = {https://proceedings.mlr.press/v134/baby21a.html}
}
Endnote
%0 Conference Paper
%T Optimal Dynamic Regret in Exp-Concave Online Learning
%A Dheeraj Baby
%A Yu-Xiang Wang
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-baby21a
%I PMLR
%P 359--409
%U https://proceedings.mlr.press/v134/baby21a.html
%V 134
APA
Baby, D. & Wang, Y. (2021). Optimal Dynamic Regret in Exp-Concave Online Learning. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:359-409. Available from https://proceedings.mlr.press/v134/baby21a.html.