Robust Online Convex Optimization in the Presence of Outliers

Tim van Erven, Sarah Sachs, Wouter M Koolen, Wojciech Kotlowski
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:4174-4194, 2021.

Abstract

We consider online convex optimization when a number k of data points are outliers that may be corrupted. We model this by introducing the notion of robust regret, which measures the regret only on rounds that are not outliers. The aim for the learner is to achieve small robust regret, without knowing where the outliers are. If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds. We further ask which additional assumptions would allow for a linear number of outliers. It turns out that the usual benign cases of independently, identically distributed (i.i.d.) observations or strongly convex losses are not sufficient. However, combining i.i.d. observations with the assumption that outliers are those observations that are in an extreme quantile of the distribution does lead to sublinear robust regret, even though the expected number of outliers is linear.
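To make the "filtering strategy on extreme gradients" concrete, the sketch below runs online gradient descent but skips any update whose gradient norm exceeds a fixed threshold. This is an illustrative reading of the idea only; the threshold value, the norm-based filter, and the function names here are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def filtered_ogd(grad_fns, eta=0.1, threshold=10.0, dim=2):
    """Online gradient descent with a naive extreme-gradient filter.

    grad_fns: one gradient oracle per round (called at the current iterate).
    Rounds whose gradient norm exceeds `threshold` are treated as suspected
    outliers and produce no update. Returns the iterate played each round.
    """
    x = np.zeros(dim)
    iterates = []
    for grad_fn in grad_fns:
        iterates.append(x.copy())      # play the current point
        g = grad_fn(x)
        if np.linalg.norm(g) <= threshold:
            x = x - eta * g            # normal OGD step
        # else: extreme gradient filtered out, iterate unchanged
    return iterates

# Example: two clean quadratic-loss rounds around a corrupted round
# whose gradient is enormous; the filter leaves the iterate untouched.
rounds = [
    lambda x: x - np.ones(2),          # gradient of 0.5 * ||x - 1||^2
    lambda x: 1e6 * np.ones(2),        # corrupted round (outlier)
    lambda x: x - np.ones(2),
]
iterates = filtered_ogd(rounds)
```

On this toy sequence the corrupted round is skipped, so the iterate played on round 3 equals the one played on round 2, and only the clean rounds drive progress toward the minimizer.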

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-vanerven21a,
  title = {Robust Online Convex Optimization in the Presence of Outliers},
  author = {van Erven, Tim and Sachs, Sarah and Koolen, Wouter M and Kotlowski, Wojciech},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages = {4174--4194},
  year = {2021},
  editor = {Belkin, Mikhail and Kpotufe, Samory},
  volume = {134},
  series = {Proceedings of Machine Learning Research},
  month = {15--19 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v134/vanerven21a/vanerven21a.pdf},
  url = {https://proceedings.mlr.press/v134/vanerven21a.html},
  abstract = {We consider online convex optimization when a number k of data points are outliers that may be corrupted. We model this by introducing the notion of robust regret, which measures the regret only on rounds that are not outliers. The aim for the learner is to achieve small robust regret, without knowing where the outliers are. If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds. We further ask which additional assumptions would allow for a linear number of outliers. It turns out that the usual benign cases of independently, identically distributed (i.i.d.) observations or strongly convex losses are not sufficient. However, combining i.i.d. observations with the assumption that outliers are those observations that are in an extreme quantile of the distribution, does lead to sublinear robust regret, even though the expected number of outliers is linear.}
}
Endnote
%0 Conference Paper
%T Robust Online Convex Optimization in the Presence of Outliers
%A Tim van Erven
%A Sarah Sachs
%A Wouter M Koolen
%A Wojciech Kotlowski
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-vanerven21a
%I PMLR
%P 4174--4194
%U https://proceedings.mlr.press/v134/vanerven21a.html
%V 134
%X We consider online convex optimization when a number k of data points are outliers that may be corrupted. We model this by introducing the notion of robust regret, which measures the regret only on rounds that are not outliers. The aim for the learner is to achieve small robust regret, without knowing where the outliers are. If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds. We further ask which additional assumptions would allow for a linear number of outliers. It turns out that the usual benign cases of independently, identically distributed (i.i.d.) observations or strongly convex losses are not sufficient. However, combining i.i.d. observations with the assumption that outliers are those observations that are in an extreme quantile of the distribution, does lead to sublinear robust regret, even though the expected number of outliers is linear.
APA
van Erven, T., Sachs, S., Koolen, W.M. & Kotlowski, W. (2021). Robust Online Convex Optimization in the Presence of Outliers. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:4174-4194. Available from https://proceedings.mlr.press/v134/vanerven21a.html.