Black-Box Reductions for Parameter-free Online Learning in Banach Spaces

Ashok Cutkosky, Francesco Orabona
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1493-1529, 2018.

Abstract

We introduce several new black-box reductions that significantly improve the design of adaptive and parameter-free online learning algorithms by simplifying analysis, improving regret guarantees, and sometimes even improving runtime. We reduce parameter-free online learning to online exp-concave optimization, we reduce optimization in a Banach space to one-dimensional optimization, and we reduce optimization over a constrained domain to unconstrained optimization. All of our reductions run as fast as online gradient descent. We use our new techniques to improve upon the previously best regret bounds for parameter-free learning, and do so for arbitrary norms.
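For readers who want a concrete picture, here is a minimal sketch of what a reduction of the second kind (Banach-space optimization via a one-dimensional learner) can look like in R^d: a scalar parameter-free learner chooses a magnitude, online gradient descent on the unit ball chooses a direction, and the player outputs their product. This is an illustrative sketch only; the class names, the coin-betting update, and the fixed OGD step size below are assumptions made for the example, not the paper's exact constructions.

import numpy as np

# Sketch: reduce online linear optimization in R^d to a one-dimensional
# problem. A scalar parameter-free learner picks the magnitude z_t, online
# gradient descent on the unit ball picks the direction y_t, and we play
# w_t = z_t * y_t.

class ScalarCoinBetting:
    """Hypothetical 1-D parameter-free learner with a KT-style betting fraction."""
    def __init__(self, initial_wealth=1.0):
        self.wealth = initial_wealth
        self.beta = 0.0       # current betting fraction
        self.grad_sum = 0.0   # running sum of scalar gradients
        self.t = 0            # rounds played

    def predict(self):
        return self.beta * self.wealth

    def update(self, g):
        # g is a scalar gradient, assumed to satisfy |g| <= 1
        self.wealth -= g * self.predict()          # wealth change this round
        self.t += 1
        self.grad_sum += g
        self.beta = -self.grad_sum / (self.t + 1)  # KT-style fraction

class UnitBallOGD:
    """Online gradient descent projected onto the Euclidean unit ball."""
    def __init__(self, dim, step=0.1):
        self.y = np.zeros(dim)
        self.step = step

    def predict(self):
        return self.y

    def update(self, g):
        self.y = self.y - self.step * g
        norm = np.linalg.norm(self.y)
        if norm > 1.0:
            self.y /= norm   # project back onto the unit ball

class MagnitudeDirectionLearner:
    """Play w_t = z_t * y_t; feed <g_t, y_t> to the scalar learner and g_t to OGD."""
    def __init__(self, dim):
        self.scalar = ScalarCoinBetting()
        self.direction = UnitBallOGD(dim)

    def predict(self):
        return self.scalar.predict() * self.direction.predict()

    def update(self, g):
        y = self.direction.predict()
        self.scalar.update(float(np.dot(g, y)))  # scalar learner sees <g, y_t>
        self.direction.update(g)

# Toy usage on random linear losses <g_t, w>, with ||g_t|| <= 1.
rng = np.random.default_rng(0)
learner = MagnitudeDirectionLearner(dim=5)
for _ in range(100):
    w = learner.predict()
    g = rng.normal(size=5)
    g = g / max(1.0, float(np.linalg.norm(g)))
    learner.update(g)

Note that each round costs one inner product, one vector update, and one projection, i.e. O(d) time, consistent with the abstract's claim that the reductions run as fast as online gradient descent.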

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-cutkosky18a,
  title     = {Black-Box Reductions for Parameter-free Online Learning in Banach Spaces},
  author    = {Cutkosky, Ashok and Orabona, Francesco},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {1493--1529},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/cutkosky18a/cutkosky18a.pdf},
  url       = {https://proceedings.mlr.press/v75/cutkosky18a.html}
}
Endnote
%0 Conference Paper
%T Black-Box Reductions for Parameter-free Online Learning in Banach Spaces
%A Ashok Cutkosky
%A Francesco Orabona
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-cutkosky18a
%I PMLR
%P 1493--1529
%U https://proceedings.mlr.press/v75/cutkosky18a.html
%V 75
APA
Cutkosky, A. & Orabona, F. (2018). Black-Box Reductions for Parameter-free Online Learning in Banach Spaces. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:1493-1529. Available from https://proceedings.mlr.press/v75/cutkosky18a.html.
