Beyond Worst-Case Online Classification: VC-Based Regret Bounds for Relaxed Benchmarks

Omar Montasser, Abhishek Shetty, Nikita Zhivotovskiy
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:4168-4202, 2025.

Abstract

We revisit online binary classification by shifting the focus from competing with the best-in-class binary loss to competing against relaxed benchmarks that capture smoothed notions of optimality. Instead of measuring regret relative to the exact minimal binary error (a standard approach that leads to worst-case bounds tied to the Littlestone dimension), we consider comparing with predictors that are robust to small input perturbations, perform well under Gaussian smoothing, or maintain a prescribed output margin. Previous relaxations of this kind were primarily limited to the hinge loss. Our algorithms achieve regret guarantees that depend only on the VC dimension and the complexity of the instance space (e.g., metric entropy), and notably, they incur only an $O(\log(1/\gamma))$ dependence on the generalized margin $\gamma$. This stands in contrast to most existing regret bounds, which typically exhibit a polynomial dependence on $1/\gamma$. We complement this with matching lower bounds. Our analysis connects recent ideas from adversarial robustness and smoothed online learning.
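To make one of the relaxed benchmarks concrete, here is a hedged sketch in our own notation (the paper studies several relaxations; this display covers only the perturbation-robust one, and the symbols $\hat{y}_t$, $x_t$, $y_t$, $\mathcal{H}$ are our assumptions rather than the paper's). The learner's cumulative binary loss is compared against a benchmark charged the worst-case loss over a $\gamma$-ball around each instance:

$$
\mathrm{Reg}_T \;=\; \sum_{t=1}^{T} \mathbf{1}\{\hat{y}_t \neq y_t\} \;-\; \inf_{h \in \mathcal{H}} \sum_{t=1}^{T} \sup_{\|x_t' - x_t\| \le \gamma} \mathbf{1}\{h(x_t') \neq y_t\}.
$$

Because the benchmark must be correct on an entire $\gamma$-ball rather than only at the observed point, it is weaker than the exact minimal binary error; the abstract's claim is that this kind of relaxation suffices to replace the Littlestone dimension with the VC dimension, at the price of only an $O(\log(1/\gamma))$ dependence on $\gamma$.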

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-montasser25a,
  title     = {Beyond Worst-Case Online Classification: VC-Based Regret Bounds for Relaxed Benchmarks},
  author    = {Montasser, Omar and Shetty, Abhishek and Zhivotovskiy, Nikita},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {4168--4202},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/montasser25a/montasser25a.pdf},
  url       = {https://proceedings.mlr.press/v291/montasser25a.html},
  abstract  = {We revisit online binary classification by shifting the focus from competing with the best-in-class binary loss to competing against relaxed benchmarks that capture smoothed notions of optimality. Instead of measuring regret relative to the exact minimal binary error (a standard approach that leads to worst-case bounds tied to the Littlestone dimension), we consider comparing with predictors that are robust to small input perturbations, perform well under Gaussian smoothing, or maintain a prescribed output margin. Previous relaxations of this kind were primarily limited to the hinge loss. Our algorithms achieve regret guarantees that depend only on the VC dimension and the complexity of the instance space (e.g., metric entropy), and notably, they incur only an $O(\log(1/\gamma))$ dependence on the generalized margin $\gamma$. This stands in contrast to most existing regret bounds, which typically exhibit a polynomial dependence on $1/\gamma$. We complement this with matching lower bounds. Our analysis connects recent ideas from adversarial robustness and smoothed online learning.}
}
Endnote
%0 Conference Paper
%T Beyond Worst-Case Online Classification: VC-Based Regret Bounds for Relaxed Benchmarks
%A Omar Montasser
%A Abhishek Shetty
%A Nikita Zhivotovskiy
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-montasser25a
%I PMLR
%P 4168--4202
%U https://proceedings.mlr.press/v291/montasser25a.html
%V 291
%X We revisit online binary classification by shifting the focus from competing with the best-in-class binary loss to competing against relaxed benchmarks that capture smoothed notions of optimality. Instead of measuring regret relative to the exact minimal binary error (a standard approach that leads to worst-case bounds tied to the Littlestone dimension), we consider comparing with predictors that are robust to small input perturbations, perform well under Gaussian smoothing, or maintain a prescribed output margin. Previous relaxations of this kind were primarily limited to the hinge loss. Our algorithms achieve regret guarantees that depend only on the VC dimension and the complexity of the instance space (e.g., metric entropy), and notably, they incur only an $O(\log(1/\gamma))$ dependence on the generalized margin $\gamma$. This stands in contrast to most existing regret bounds, which typically exhibit a polynomial dependence on $1/\gamma$. We complement this with matching lower bounds. Our analysis connects recent ideas from adversarial robustness and smoothed online learning.
APA
Montasser, O., Shetty, A. & Zhivotovskiy, N. (2025). Beyond Worst-Case Online Classification: VC-Based Regret Bounds for Relaxed Benchmarks. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:4168-4202. Available from https://proceedings.mlr.press/v291/montasser25a.html.