Learning with Monotone Adversarial Corruptions

Kasper Green Larsen, Chirag Pabbaraju, Abhishek Shetty
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-18, 2026.

Abstract

We study the extent to which standard machine learning algorithms rely on exchangeability and independence of data by introducing a monotone adversarial corruption model. In this model, an adversary, upon looking at a "clean" i.i.d. dataset, inserts additional "corrupted" points of their choice into the dataset. These added points are constrained to be monotone corruptions, in that they get labeled according to the ground-truth target function. Perhaps surprisingly, we demonstrate that in this setting, all known optimal learning algorithms for binary classification can be made to achieve suboptimal expected error on a new independent test point drawn from the same distribution as the clean dataset. On the other hand, we show that uniform convergence-based algorithms do not degrade in their guarantees. Our results showcase how optimal learning algorithms break down in the face of seemingly helpful monotone corruptions, exposing their overreliance on exchangeability.

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-larsen26a,
  title     = {Learning with Monotone Adversarial Corruptions},
  author    = {Larsen, Kasper Green and Pabbaraju, Chirag and Shetty, Abhishek},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--18},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/larsen26a/larsen26a.pdf},
  url       = {https://proceedings.mlr.press/v313/larsen26a.html},
  abstract  = {We study the extent to which standard machine learning algorithms rely on exchangeability and independence of data by introducing a monotone adversarial corruption model. In this model, an adversary, upon looking at a "clean" i.i.d. dataset, inserts additional "corrupted" points of their choice into the dataset. These added points are constrained to be monotone corruptions, in that they get labeled according to the ground-truth target function. Perhaps surprisingly, we demonstrate that in this setting, all known optimal learning algorithms for binary classification can be made to achieve suboptimal expected error on a new independent test point drawn from the same distribution as the clean dataset. On the other hand, we show that uniform convergence-based algorithms do not degrade in their guarantees. Our results showcase how optimal learning algorithms break down in the face of seemingly helpful monotone corruptions, exposing their overreliance on exchangeability.}
}
Endnote
%0 Conference Paper
%T Learning with Monotone Adversarial Corruptions
%A Kasper Green Larsen
%A Chirag Pabbaraju
%A Abhishek Shetty
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-larsen26a
%I PMLR
%P 1--18
%U https://proceedings.mlr.press/v313/larsen26a.html
%V 313
%X We study the extent to which standard machine learning algorithms rely on exchangeability and independence of data by introducing a monotone adversarial corruption model. In this model, an adversary, upon looking at a "clean" i.i.d. dataset, inserts additional "corrupted" points of their choice into the dataset. These added points are constrained to be monotone corruptions, in that they get labeled according to the ground-truth target function. Perhaps surprisingly, we demonstrate that in this setting, all known optimal learning algorithms for binary classification can be made to achieve suboptimal expected error on a new independent test point drawn from the same distribution as the clean dataset. On the other hand, we show that uniform convergence-based algorithms do not degrade in their guarantees. Our results showcase how optimal learning algorithms break down in the face of seemingly helpful monotone corruptions, exposing their overreliance on exchangeability.
APA
Larsen, K.G., Pabbaraju, C. & Shetty, A. (2026). Learning with Monotone Adversarial Corruptions. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-18. Available from https://proceedings.mlr.press/v313/larsen26a.html.