Robust Online Learning

Sajad Ashkezari
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-14, 2026.

Abstract

We study the problem of learning robust classifiers, where the classifier receives a perturbed input and the clean point and its label are also chosen adversarially. We formulate this problem as an online learning problem and consider both the realizable and agnostic learnability of hypothesis classes. We define a new dimension of classes and show it controls the mistake bounds in the realizable setting and the regret bounds in the agnostic setting. In contrast to the dimension that characterizes learnability in the PAC setting, our dimension is rather simple and resembles the Littlestone dimension. We generalize our dimension to multiclass hypothesis classes and prove similar results in the realizable case. Finally, we study the case where the learner does not know the set of allowed perturbations for each point and only has some prior on them.
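The protocol the abstract describes can be made concrete with a small sketch. The code below is illustrative only and is not taken from the paper: it instantiates a tiny threshold hypothesis class, hypothetical perturbation sets U(x), and a majority-vote (Halving-style) learner that, after each round, discards hypotheses that are not robustly correct on the revealed clean example. It assumes a full-information variant in which the clean pair (x, y) is revealed after the prediction; the perturbation sets and the hypothesis class are made-up names for illustration.

```python
# Hedged sketch of a robust online learning round: the learner sees only a
# perturbed input z in U(x), predicts, and then (in this simplified variant)
# the clean pair (x, y) is revealed. All names here (DOMAIN, U, thresholds)
# are illustrative assumptions, not definitions from the paper.

# Domain: points 0..3; labels in {0, 1}.
DOMAIN = [0, 1, 2, 3]

# Hypothetical perturbation sets U(x): the adversary may show any z in U(x).
U = {0: {0, 1}, 1: {1}, 2: {2, 3}, 3: {3}}

# A small hypothesis class: threshold classifiers h_t(x) = 1 iff x >= t.
def make_threshold(t):
    return lambda x, t=t: int(x >= t)

H = [make_threshold(t) for t in range(5)]

def robustly_correct(h, x, y):
    """h is robustly correct on (x, y) if it labels every z in U(x) as y."""
    return all(h(z) == y for z in U[x])

def halving_predict(version_space, z):
    """Majority vote of the surviving hypotheses on the perturbed input z."""
    votes = sum(h(z) for h in version_space)
    return int(2 * votes >= len(version_space))

# Realizable stream: labels come from a target that is robust under U.
target = make_threshold(2)
stream = [(0, target(0)), (2, target(2)), (1, target(1)), (3, target(3))]

version_space = list(H)
mistakes = 0
for x, y in stream:
    # The adversary reveals only a perturbation z of x (here: max of U(x)).
    z = max(U[x])
    if halving_predict(version_space, z) != y:
        mistakes += 1
    # After the round, discard hypotheses not robustly correct on (x, y).
    version_space = [h for h in version_space if robustly_correct(h, x, y)]

print(mistakes, len(version_space))
```

On a realizable stream the version space never empties, and a Halving-style learner of this shape makes at most log2|H| mistakes; the paper's dimension plays the analogous role of |H|'s logarithm for general classes.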

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-ashkezari26a,
  title     = {Robust Online Learning},
  author    = {Ashkezari, Sajad},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--14},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/ashkezari26a/ashkezari26a.pdf},
  url       = {https://proceedings.mlr.press/v313/ashkezari26a.html},
  abstract  = {We study the problem of learning robust classifiers where the classifier will receive a perturbed input and the clean data and its label are also adversarially chosen. We formulate this problem as an online learning problem and consider both the realizable and agnostic learnability of hypothesis classes. We define a new dimension of classes and show it controls the mistake bounds in the realizable setting and the regret bounds in the agnostic setting. In contrast to the dimension that characterizes learnability in the PAC setting, our dimension is rather simple and resembles the Littlestone dimension. We generalize our dimension to multiclass hypothesis classes and prove similar results in the realizable case. Finally, we study the case where the learner does not know the set of allowed perturbations for each point and only has some prior on them.}
}
Endnote
%0 Conference Paper
%T Robust Online Learning
%A Sajad Ashkezari
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-ashkezari26a
%I PMLR
%P 1--14
%U https://proceedings.mlr.press/v313/ashkezari26a.html
%V 313
%X We study the problem of learning robust classifiers where the classifier will receive a perturbed input and the clean data and its label are also adversarially chosen. We formulate this problem as an online learning problem and consider both the realizable and agnostic learnability of hypothesis classes. We define a new dimension of classes and show it controls the mistake bounds in the realizable setting and the regret bounds in the agnostic setting. In contrast to the dimension that characterizes learnability in the PAC setting, our dimension is rather simple and resembles the Littlestone dimension. We generalize our dimension to multiclass hypothesis classes and prove similar results in the realizable case. Finally, we study the case where the learner does not know the set of allowed perturbations for each point and only has some prior on them.
APA
Ashkezari, S. (2026). Robust Online Learning. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-14. Available from https://proceedings.mlr.press/v313/ashkezari26a.html.