Parameter-free, Dynamic, and Strongly-Adaptive Online Learning

Ashok Cutkosky
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2250-2259, 2020.

Abstract

We provide a new online learning algorithm that for the first time combines several disparate notions of adaptivity. First, our algorithm obtains a “parameter-free” regret bound that adapts to the norm of the comparator and the squared norm of the size of the gradients it observes. Second, it obtains a “strongly-adaptive” regret bound, so that for any given interval of length $N$, the regret over the interval is $\tilde O(\sqrt{N})$. Finally, our algorithm obtains an optimal “dynamic” regret bound: for any sequence of comparators with path-length $P$, our algorithm obtains regret $\tilde O(\sqrt{PN})$ over intervals of length $N$. Our primary technique for achieving these goals is a new method of combining constrained online learning regret bounds that does not rely on an expert meta-algorithm to aggregate learners.
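For context, the two interval-based notions mentioned above can be sketched as follows; this is a minimal illustration using standard definitions from the online learning literature, and the notation ($w_t$ for the learner's iterates, $\ell_t$ for the losses, $u$ and $u_t$ for comparators) is ours rather than the paper's. Over an interval $I = [a, b]$ of length $N$, a strongly-adaptive bound controls the static regret

\[
\mathrm{Regret}_I(u) \;=\; \sum_{t=a}^{b} \bigl( \ell_t(w_t) - \ell_t(u) \bigr) \;\le\; \tilde O(\sqrt{N}),
\]

while the dynamic bound lets the comparator move, paying for its path-length $P = \sum_{t=a+1}^{b} \| u_t - u_{t-1} \|$:

\[
\mathrm{Regret}_I(u_a, \dots, u_b) \;=\; \sum_{t=a}^{b} \bigl( \ell_t(w_t) - \ell_t(u_t) \bigr) \;\le\; \tilde O(\sqrt{PN}).
\]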

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-cutkosky20a,
  title     = {Parameter-free, Dynamic, and Strongly-Adaptive Online Learning},
  author    = {Cutkosky, Ashok},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2250--2259},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/cutkosky20a/cutkosky20a.pdf},
  url       = {https://proceedings.mlr.press/v119/cutkosky20a.html}
}
Endnote
%0 Conference Paper
%T Parameter-free, Dynamic, and Strongly-Adaptive Online Learning
%A Ashok Cutkosky
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-cutkosky20a
%I PMLR
%P 2250--2259
%U https://proceedings.mlr.press/v119/cutkosky20a.html
%V 119
APA
Cutkosky, A. (2020). Parameter-free, Dynamic, and Strongly-Adaptive Online Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2250-2259. Available from https://proceedings.mlr.press/v119/cutkosky20a.html.
