Simplifying Adversarially Robust PAC Learning With Tolerance

Hassan Ashtiani, Vinayak Pathak, Ruth Urner
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:147-168, 2025.

Abstract

Adversarially robust PAC learning has proved to be challenging, with the currently best known learners relying on improper methods based on intricate compression schemes, resulting in sample complexity exponential in the VC-dimension. A series of follow-up work considered a slightly relaxed version of the problem called adversarially robust learning \emph{with tolerance} and achieved better sample complexity in terms of the VC-dimension. However, those algorithms were either improper and complex, or required additional assumptions on the hypothesis class $\mathcal{H}$. We prove, for the first time, the existence of a simpler learner that achieves a sample complexity linear in the VC-dimension without requiring additional assumptions on $\mathcal{H}$. Even though our learner is improper, it is “almost proper” in the sense that it outputs a hypothesis that is “similar” to a hypothesis in $\mathcal{H}$. We also use the ideas from our algorithm to construct a semi-supervised learner in the tolerant setting. This simple algorithm achieves comparable bounds to the previous (non-tolerant) semi-supervised algorithm, but avoids the use of intricate subroutines from previous works, and is “almost proper.”

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-ashtiani25a,
  title     = {Simplifying Adversarially Robust {PAC} Learning With Tolerance},
  author    = {Ashtiani, Hassan and Pathak, Vinayak and Urner, Ruth},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {147--168},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/ashtiani25a/ashtiani25a.pdf},
  url       = {https://proceedings.mlr.press/v291/ashtiani25a.html},
  abstract  = {Adversarially robust PAC learning has proved to be challenging, with the currently best known learners relying on improper methods based on intricate compression schemes, resulting in sample complexity exponential in the VC-dimension. A series of follow-up work considered a slightly relaxed version of the problem called adversarially robust learning \emph{with tolerance} and achieved better sample complexity in terms of the VC-dimension. However, those algorithms were either improper and complex, or required additional assumptions on the hypothesis class $\mathcal{H}$. We prove, for the first time, the existence of a simpler learner that achieves a sample complexity linear in the VC-dimension without requiring additional assumptions on $\mathcal{H}$. Even though our learner is improper, it is ``almost proper'' in the sense that it outputs a hypothesis that is ``similar'' to a hypothesis in $\mathcal{H}$. We also use the ideas from our algorithm to construct a semi-supervised learner in the tolerant setting. This simple algorithm achieves comparable bounds to the previous (non-tolerant) semi-supervised algorithm, but avoids the use of intricate subroutines from previous works, and is ``almost proper.''}
}
Endnote
%0 Conference Paper
%T Simplifying Adversarially Robust PAC Learning With Tolerance
%A Hassan Ashtiani
%A Vinayak Pathak
%A Ruth Urner
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-ashtiani25a
%I PMLR
%P 147--168
%U https://proceedings.mlr.press/v291/ashtiani25a.html
%V 291
%X Adversarially robust PAC learning has proved to be challenging, with the currently best known learners relying on improper methods based on intricate compression schemes, resulting in sample complexity exponential in the VC-dimension. A series of follow-up work considered a slightly relaxed version of the problem called adversarially robust learning \emph{with tolerance} and achieved better sample complexity in terms of the VC-dimension. However, those algorithms were either improper and complex, or required additional assumptions on the hypothesis class $\mathcal{H}$. We prove, for the first time, the existence of a simpler learner that achieves a sample complexity linear in the VC-dimension without requiring additional assumptions on $\mathcal{H}$. Even though our learner is improper, it is “almost proper” in the sense that it outputs a hypothesis that is “similar” to a hypothesis in $\mathcal{H}$. We also use the ideas from our algorithm to construct a semi-supervised learner in the tolerant setting. This simple algorithm achieves comparable bounds to the previous (non-tolerant) semi-supervised algorithm, but avoids the use of intricate subroutines from previous works, and is “almost proper.”
APA
Ashtiani, H., Pathak, V. & Urner, R. (2025). Simplifying Adversarially Robust PAC Learning With Tolerance. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:147-168. Available from https://proceedings.mlr.press/v291/ashtiani25a.html.