Consistent Online Optimization: Convex and Submodular

Mohammad Reza Karimi Jaghargh, Andreas Krause, Silvio Lattanzi, Sergei Vassilvitskii
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2241-2250, 2019.

Abstract

Modern online learning algorithms achieve low (sublinear) regret in a wide variety of settings. These algorithms, however, update their solution at every time step. While each update is computationally efficient, the very requirement of frequent updates makes the algorithms untenable in some practical applications. In this work we develop online learning algorithms that update only a sublinear number of times. We give a meta-algorithm based on non-homogeneous Poisson processes that provides a smooth trade-off between regret and the frequency of updates. Empirically, we show that in many cases we can significantly reduce the number of updates at only a minimal increase in regret.
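The meta-algorithm and its guarantees are given in the paper; as a rough, non-authoritative sketch of the core idea, the snippet below wraps online gradient descent in a randomized update schedule: the learner always tracks gradients internally, but only refreshes its published solution when a non-homogeneous Poisson clock fires. The rate lambda(t) = t^(-alpha), the step-size schedule, and the per-round Bernoulli discretization of the Poisson clock are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def consistent_ogd(grads, lr=0.1, alpha=0.5, dim=5):
    """Illustrative sketch (assumed setup, not the paper's algorithm):
    online gradient descent whose *published* solution is refreshed only
    at event times of a non-homogeneous Poisson process with assumed
    rate lambda(t) = t**(-alpha).

    With alpha in (0, 1), the expected number of published updates over
    T rounds grows like T**(1 - alpha), i.e. sublinearly, at the cost of
    extra regret incurred while the published solution is stale.
    """
    T = len(grads)
    x_internal = np.zeros(dim)   # learner's private iterate
    x_published = np.zeros(dim)  # solution visible to the environment
    n_updates = 0
    for t in range(1, T + 1):
        # The base learner (OGD) tracks gradients at every round.
        x_internal -= lr / np.sqrt(t) * grads[t - 1]
        # Poisson clock, discretized per round: over one unit of time at
        # rate lambda(t), at least one event fires with probability
        # 1 - exp(-lambda(t)).
        if rng.random() < 1.0 - np.exp(-t ** (-alpha)):
            x_published = x_internal.copy()
            n_updates += 1
    return x_published, n_updates
```

For example, with alpha = 0.5, running `consistent_ogd(rng.normal(size=(10_000, 5)))` publishes roughly 200 updates over 10,000 rounds (on the order of 2 * sqrt(T)), while the internal iterate is updated every round. Decaying the rate over time is what makes the update count sublinear; a constant rate would only thin the updates by a constant factor.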

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-jaghargh19a,
  title     = {Consistent Online Optimization: Convex and Submodular},
  author    = {Jaghargh, Mohammad Reza Karimi and Krause, Andreas and Lattanzi, Silvio and Vassilvitskii, Sergei},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {2241--2250},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/jaghargh19a/jaghargh19a.pdf},
  url       = {https://proceedings.mlr.press/v89/jaghargh19a.html},
  abstract  = {Modern online learning algorithms achieve low (sublinear) regret in a wide variety of settings. These algorithms, however, update their solution at every time step. While each update is computationally efficient, the very requirement of frequent updates makes the algorithms untenable in some practical applications. In this work we develop online learning algorithms that update only a sublinear number of times. We give a meta-algorithm based on non-homogeneous Poisson processes that provides a smooth trade-off between regret and the frequency of updates. Empirically, we show that in many cases we can significantly reduce the number of updates at only a minimal increase in regret.}
}
Endnote
%0 Conference Paper
%T Consistent Online Optimization: Convex and Submodular
%A Mohammad Reza Karimi Jaghargh
%A Andreas Krause
%A Silvio Lattanzi
%A Sergei Vassilvitskii
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-jaghargh19a
%I PMLR
%P 2241--2250
%U https://proceedings.mlr.press/v89/jaghargh19a.html
%V 89
%X Modern online learning algorithms achieve low (sublinear) regret in a wide variety of settings. These algorithms, however, update their solution at every time step. While each update is computationally efficient, the very requirement of frequent updates makes the algorithms untenable in some practical applications. In this work we develop online learning algorithms that update only a sublinear number of times. We give a meta-algorithm based on non-homogeneous Poisson processes that provides a smooth trade-off between regret and the frequency of updates. Empirically, we show that in many cases we can significantly reduce the number of updates at only a minimal increase in regret.
APA
Jaghargh, M. R. K., Krause, A., Lattanzi, S. & Vassilvitskii, S. (2019). Consistent Online Optimization: Convex and Submodular. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:2241-2250. Available from https://proceedings.mlr.press/v89/jaghargh19a.html.