Implicit Parameter-free Online Learning with Truncated Linear Models

Keyi Chen, Ashok Cutkosky, Francesco Orabona
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:148-175, 2022.

Abstract

Parameter-free algorithms are online learning algorithms that do not require setting learning rates. They achieve optimal regret with respect to the distance between the initial point and any competitor. Yet, parameter-free algorithms do not take into account the geometry of the losses. Recently, in the stochastic optimization literature, it has been proposed to instead use truncated linear lower bounds, which produce better performance by more closely modeling the losses. In particular, truncated linear models greatly reduce the problem of overshooting the minimum of the loss function. Unfortunately, truncated linear models cannot be used with parameter-free algorithms because the updates become very expensive to compute. In this paper, we propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an “implicit” flavor. Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties. We also conduct an empirical study demonstrating the practical utility of our algorithms.
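
For intuition, the following is a minimal sketch of the truncated linear model idea from the stochastic optimization literature and the implicit-style step it induces. This is an illustration only, not the paper's parameter-free algorithm: it assumes a known lower bound on the losses (zero for nonnegative losses such as the hinge loss) and a fixed step size eta, which is exactly the tuning that the paper's parameter-free algorithms remove.

import numpy as np

def truncated_model(loss_at_xt, g_t, x_t, x, lower_bound=0.0):
    # Truncated linear lower bound on the loss at x: the usual
    # first-order model, clipped so it never drops below a known
    # lower bound on the true loss (0 for nonnegative losses).
    return max(loss_at_xt + g_t @ (x - x_t), lower_bound)

def truncated_prox_step(x_t, loss_at_xt, g_t, eta, lower_bound=0.0):
    # Implicit (proximal) step on the truncated model:
    #   x_{t+1} = argmin_x max(loss_at_xt + <g_t, x - x_t>, lower_bound)
    #                      + ||x - x_t||^2 / (2 * eta)
    # Closed form: a gradient step whose length is capped so the new
    # iterate never steps past the minimum of the truncated model.
    gap = loss_at_xt - lower_bound
    cap = gap / (g_t @ g_t + 1e-12)
    return x_t - min(eta, cap) * g_t

On a hinge loss, for example, the cap stops the update exactly where the truncated model reaches zero, whereas a plain gradient step with a large eta would overshoot it.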

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-chen22a,
  title     = {Implicit Parameter-free Online Learning with Truncated Linear Models},
  author    = {Chen, Keyi and Cutkosky, Ashok and Orabona, Francesco},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages     = {148--175},
  year      = {2022},
  editor    = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume    = {167},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v167/chen22a/chen22a.pdf},
  url       = {https://proceedings.mlr.press/v167/chen22a.html},
  abstract  = {Parameter-free algorithms are online learning algorithms that do not require setting learning rates. They achieve optimal regret with respect to the distance between the initial point and any competitor. Yet, parameter-free algorithms do not take into account the geometry of the losses. Recently, in the stochastic optimization literature, it has been proposed to instead use truncated linear lower bounds, which produce better performance by more closely modeling the losses. In particular, truncated linear models greatly reduce the problem of overshooting the minimum of the loss function. Unfortunately, truncated linear models cannot be used with parameter-free algorithms because the updates become very expensive to compute. In this paper, we propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an “implicit” flavor. Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties. We also conduct an empirical study demonstrating the practical utility of our algorithms.}
}
Endnote
%0 Conference Paper
%T Implicit Parameter-free Online Learning with Truncated Linear Models
%A Keyi Chen
%A Ashok Cutkosky
%A Francesco Orabona
%B Proceedings of The 33rd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Sanjoy Dasgupta
%E Nika Haghtalab
%F pmlr-v167-chen22a
%I PMLR
%P 148--175
%U https://proceedings.mlr.press/v167/chen22a.html
%V 167
%X Parameter-free algorithms are online learning algorithms that do not require setting learning rates. They achieve optimal regret with respect to the distance between the initial point and any competitor. Yet, parameter-free algorithms do not take into account the geometry of the losses. Recently, in the stochastic optimization literature, it has been proposed to instead use truncated linear lower bounds, which produce better performance by more closely modeling the losses. In particular, truncated linear models greatly reduce the problem of overshooting the minimum of the loss function. Unfortunately, truncated linear models cannot be used with parameter-free algorithms because the updates become very expensive to compute. In this paper, we propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an “implicit” flavor. Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties. We also conduct an empirical study demonstrating the practical utility of our algorithms.
APA
Chen, K., Cutkosky, A. & Orabona, F. (2022). Implicit Parameter-free Online Learning with Truncated Linear Models. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:148-175. Available from https://proceedings.mlr.press/v167/chen22a.html.
