Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization

Guanghui Wang, Shiyin Lu, Lijun Zhang
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:659-668, 2020.

Abstract

In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions. Existing universal methods are limited in the sense that they are optimal for only a subclass of loss functions. To address this limitation, we propose a novel online algorithm, namely Maler, which enjoys the optimal $O(\sqrt{T})$, $O(d\log T)$ and $O(\log T)$ regret bounds for general convex, exponentially concave, and strongly convex functions respectively. The essential idea is to run multiple types of learning algorithms with different learning rates in parallel, and utilize a meta-algorithm to track the best on the fly. Empirical results demonstrate the effectiveness of our method.
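The core idea described above — running several learners with different learning rates in parallel and letting a multiplicative-weights meta-algorithm track the best one — can be illustrated with a small sketch. This is a generic Hedge-over-experts simulation on a toy one-dimensional strongly convex loss, not the paper's actual Maler algorithm; all function names, the loss sequence, and the learning rates are illustrative assumptions.

```python
import math

def run_meta(T=200, etas=(0.01, 0.1, 1.0), meta_lr=0.5):
    """Hedge-style meta-algorithm over gradient-descent experts, each
    using a different learning rate (a simplified sketch of the
    parallel-experts idea; not the paper's exact Maler algorithm)."""
    # Toy loss at every round t: f_t(x) = (x - target)^2, strongly convex.
    target = 0.7
    experts = [0.0] * len(etas)   # each expert's current iterate
    weights = [1.0] * len(etas)   # meta-algorithm weights over experts

    total_loss = 0.0
    for _ in range(T):
        w_sum = sum(weights)
        # Meta decision: play the weighted average of expert predictions.
        x = sum(w * e for w, e in zip(weights, experts)) / w_sum
        total_loss += (x - target) ** 2

        for i, eta in enumerate(etas):
            loss_i = (experts[i] - target) ** 2
            # Exponentially downweight experts that incur more loss,
            # so the meta-algorithm tracks the best expert on the fly.
            weights[i] *= math.exp(-meta_lr * loss_i)
            # Each expert runs gradient descent at its own learning rate.
            grad = 2.0 * (experts[i] - target)
            experts[i] -= eta * grad

    return total_loss, weights
```

Here the expert with the well-tuned learning rate accumulates the least loss and ends up dominating the meta-weights, so the combined decision inherits its performance without knowing the right rate in advance.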

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-wang20e,
  title = {Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization},
  author = {Wang, Guanghui and Lu, Shiyin and Zhang, Lijun},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages = {659--668},
  year = {2020},
  editor = {Adams, Ryan P. and Gogate, Vibhav},
  volume = {115},
  series = {Proceedings of Machine Learning Research},
  month = {22--25 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v115/wang20e/wang20e.pdf},
  url = {https://proceedings.mlr.press/v115/wang20e.html},
  abstract = {In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions. Existing universal methods are limited in the sense that they are optimal for only a subclass of loss functions. To address this limitation, we propose a novel online algorithm, namely Maler, which enjoys the optimal $O(\sqrt{T})$, $O(d\log T)$ and $O(\log T)$ regret bounds for general convex, exponentially concave, and strongly convex functions respectively. The essential idea is to run multiple types of learning algorithms with different learning rates in parallel, and utilize a meta-algorithm to track the best on the fly. Empirical results demonstrate the effectiveness of our method.}
}
Endnote
%0 Conference Paper
%T Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization
%A Guanghui Wang
%A Shiyin Lu
%A Lijun Zhang
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-wang20e
%I PMLR
%P 659--668
%U https://proceedings.mlr.press/v115/wang20e.html
%V 115
%X In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions. Existing universal methods are limited in the sense that they are optimal for only a subclass of loss functions. To address this limitation, we propose a novel online algorithm, namely Maler, which enjoys the optimal $O(\sqrt{T})$, $O(d\log T)$ and $O(\log T)$ regret bounds for general convex, exponentially concave, and strongly convex functions respectively. The essential idea is to run multiple types of learning algorithms with different learning rates in parallel, and utilize a meta-algorithm to track the best on the fly. Empirical results demonstrate the effectiveness of our method.
APA
Wang, G., Lu, S. & Zhang, L. (2020). Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:659-668. Available from https://proceedings.mlr.press/v115/wang20e.html.