Combining Online Learning Guarantees

Ashok Cutkosky
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:895-913, 2019.

Abstract

We show how to take any two parameter-free online learning algorithms with different regret guarantees and obtain a single algorithm whose regret is the minimum of the two base algorithms. Our method is embarrassingly simple: just add the iterates. This trick can generate efficient algorithms that adapt to many norms simultaneously, as well as providing diagonal-style algorithms that still maintain dimension-free guarantees. We then proceed to show how a variant on this idea yields a black-box procedure for generating optimistic online learning algorithms. This yields the first optimistic regret guarantees in the unconstrained setting and generically increases adaptivity. Further, our optimistic algorithms are guaranteed to do no worse than their non-optimistic counterparts regardless of the quality of the optimistic estimates provided to the algorithm.
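Because the combination rule described in the abstract is black-box, it can be sketched in a few lines. The code below is only an illustration under assumed interfaces (base learners exposing hypothetical `predict()` and `update(grad)` methods); it is not the paper's pseudocode, and the toy base learner used to make the sketch runnable is ordinary online gradient descent rather than a genuinely parameter-free method.

```python
import numpy as np


class OnlineGradientDescent:
    """Toy base learner (not parameter-free); stands in for any online
    learner exposing predict()/update(grad)."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self):
        return self.w

    def update(self, grad):
        self.w -= self.lr * np.asarray(grad)


class AdditionCombiner:
    """Combines two online learners by playing the sum of their iterates.

    Both base learners receive the same gradient each round, so for linear
    losses the combined regret against a comparator u decomposes as
    Regret_A(u) + Regret_B(0) (or vice versa), which is roughly the better
    of the two base guarantees when the learners are parameter-free.
    """

    def __init__(self, learner_a, learner_b):
        self.learner_a = learner_a
        self.learner_b = learner_b

    def predict(self):
        # The combined iterate is simply the sum of the two base iterates.
        return self.learner_a.predict() + self.learner_b.predict()

    def update(self, grad):
        # Forward the same observed gradient to both base learners.
        self.learner_a.update(grad)
        self.learner_b.update(grad)


if __name__ == "__main__":
    # Linear losses <g_t, w_t>: the combiner plays the summed iterate and
    # forwards each gradient g_t unchanged to both learners.
    rng = np.random.default_rng(0)
    combiner = AdditionCombiner(OnlineGradientDescent(3, lr=0.05),
                                OnlineGradientDescent(3, lr=0.5))
    for _ in range(100):
        w_t = combiner.predict()
        g_t = rng.normal(size=3)
        combiner.update(g_t)
```

In the paper the same wrapper idea, applied with a learner run on optimistic loss estimates, gives the optimistic variant; the sketch above only covers the basic "add the iterates" combination.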

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-cutkosky19b,
  title     = {Combining Online Learning Guarantees},
  author    = {Cutkosky, Ashok},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {895--913},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/cutkosky19b/cutkosky19b.pdf},
  url       = {https://proceedings.mlr.press/v99/cutkosky19b.html},
  abstract  = {We show how to take any two parameter-free online learning algorithms with different regret guarantees and obtain a single algorithm whose regret is the minimum of the two base algorithms. Our method is embarrassingly simple: just add the iterates. This trick can generate efficient algorithms that adapt to many norms simultaneously, as well as providing diagonal-style algorithms that still maintain dimension-free guarantees. We then proceed to show how a variant on this idea yields a black-box procedure for generating \emph{optimistic} online learning algorithms. This yields the first optimistic regret guarantees in the unconstrained setting and generically increases adaptivity. Further, our optimistic algorithms are guaranteed to do no worse than their non-optimistic counterparts regardless of the quality of the optimistic estimates provided to the algorithm.}
}
Endnote
%0 Conference Paper
%T Combining Online Learning Guarantees
%A Ashok Cutkosky
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-cutkosky19b
%I PMLR
%P 895--913
%U https://proceedings.mlr.press/v99/cutkosky19b.html
%V 99
%X We show how to take any two parameter-free online learning algorithms with different regret guarantees and obtain a single algorithm whose regret is the minimum of the two base algorithms. Our method is embarrassingly simple: just add the iterates. This trick can generate efficient algorithms that adapt to many norms simultaneously, as well as providing diagonal-style algorithms that still maintain dimension-free guarantees. We then proceed to show how a variant on this idea yields a black-box procedure for generating optimistic online learning algorithms. This yields the first optimistic regret guarantees in the unconstrained setting and generically increases adaptivity. Further, our optimistic algorithms are guaranteed to do no worse than their non-optimistic counterparts regardless of the quality of the optimistic estimates provided to the algorithm.
APA
Cutkosky, A. (2019). Combining Online Learning Guarantees. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:895-913. Available from https://proceedings.mlr.press/v99/cutkosky19b.html.
