Robust Empirical Risk Minimization with Tolerance

Robi Bhattacharjee, Max Hopkins, Akash Kumar, Hantao Yu, Kamalika Chaudhuri
Proceedings of The 34th International Conference on Algorithmic Learning Theory, PMLR 201:182-203, 2023.

Abstract

Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today’s tech-dominated world, and current theoretical techniques, which require exponential sample complexity and complicated improper learning rules, fall far short of answering the need. In this work we study the fundamental paradigm of (robust) \textit{empirical risk minimization} (RERM), a simple process in which the learner outputs any hypothesis minimizing its training error. RERM famously fails to robustly learn VC classes \citep{Omar19}, a lower bound we show extends even to ‘nice’ settings such as (bounded) halfspaces. As such, we study a recent relaxation of the robust model called \textit{tolerant} robust learning \citep{Urner22}, in which the output classifier is compared to the best achievable error over slightly larger perturbation sets. We show that under geometric niceness conditions, a natural tolerant variant of RERM is indeed sufficient for $\gamma$-tolerant robust learning of VC classes over $\mathbb{R}^d$, and requires only $\tilde{O}\left( \frac{VC(H)\,d\log \frac{D}{\gamma\delta}}{\epsilon^2}\right)$ samples for robustness regions of (maximum) diameter $D$.
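To make the tolerant setup concrete, the following is a minimal Python sketch of a tolerant variant of robust ERM. It is an illustration under stated assumptions, not the paper's algorithm: the hypothesis "class" is a finite pool of candidate classifiers, the perturbation sets are $\ell_2$ balls, and the inner maximization over each ball (intractable in general) is approximated by random sampling. The names ball_perturbations, robust_empirical_loss, and tolerant_rerm are invented for this sketch; the point is only the shape of the rule, namely minimizing robust empirical error over perturbation balls inflated by the tolerance factor $(1+\gamma)$.

import numpy as np

def ball_perturbations(x, radius, n_samples, rng):
    # Draw points from the L2 ball of the given radius around x;
    # x itself is included so the clean point is always checked.
    d = x.shape[0]
    dirs = rng.normal(size=(n_samples, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = radius * rng.random(n_samples) ** (1.0 / d)  # uniform over the ball
    return np.vstack([x, x + radii[:, None] * dirs])

def robust_empirical_loss(h, X, y, radius, n_samples=64, seed=0):
    # Fraction of training points on which h is not robustly correct,
    # i.e. some sampled perturbation within `radius` of x makes h
    # disagree with the label. Exact maximization over the ball is
    # intractable in general; random sampling stands in for it here.
    rng = np.random.default_rng(seed)
    errors = 0
    for x, label in zip(X, y):
        if any(h(p) != label for p in ball_perturbations(x, radius, n_samples, rng)):
            errors += 1
    return errors / len(X)

def tolerant_rerm(hypotheses, X, y, radius, gamma):
    # Robust ERM at the inflated radius (1 + gamma) * radius: return any
    # hypothesis from the pool minimizing the (approximate) robust
    # empirical loss over the enlarged perturbation balls.
    inflated = (1.0 + gamma) * radius
    return min(hypotheses,
               key=lambda h: robust_empirical_loss(h, X, y, inflated))

# Toy usage: a pool of random halfspaces in R^2 and a linearly separable sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.0, -0.5]) > 0).astype(int)
pool = [(lambda w: (lambda p: int(p @ w > 0)))(rng.normal(size=2)) for _ in range(50)]
h_hat = tolerant_rerm(pool, X, y, radius=0.1, gamma=0.5)

Note the asymmetry that defines the tolerant model of \citep{Urner22}: the returned hypothesis is judged by its robust error at the original radius, but compared against the best achievable robust error at the inflated radius $(1+\gamma)r$, which is what licenses training against the enlarged balls.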

Cite this Paper


BibTeX
@InProceedings{pmlr-v201-bhattacharjee23a,
  title     = {Robust Empirical Risk Minimization with Tolerance},
  author    = {Bhattacharjee, Robi and Hopkins, Max and Kumar, Akash and Yu, Hantao and Chaudhuri, Kamalika},
  booktitle = {Proceedings of The 34th International Conference on Algorithmic Learning Theory},
  pages     = {182--203},
  year      = {2023},
  editor    = {Agrawal, Shipra and Orabona, Francesco},
  volume    = {201},
  series    = {Proceedings of Machine Learning Research},
  month     = {20 Feb--23 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v201/bhattacharjee23a/bhattacharjee23a.pdf},
  url       = {https://proceedings.mlr.press/v201/bhattacharjee23a.html}
}
Endnote
%0 Conference Paper
%T Robust Empirical Risk Minimization with Tolerance
%A Robi Bhattacharjee
%A Max Hopkins
%A Akash Kumar
%A Hantao Yu
%A Kamalika Chaudhuri
%B Proceedings of The 34th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Shipra Agrawal
%E Francesco Orabona
%F pmlr-v201-bhattacharjee23a
%I PMLR
%P 182--203
%U https://proceedings.mlr.press/v201/bhattacharjee23a.html
%V 201
APA
Bhattacharjee, R., Hopkins, M., Kumar, A., Yu, H., & Chaudhuri, K. (2023). Robust Empirical Risk Minimization with Tolerance. Proceedings of The 34th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 201:182-203. Available from https://proceedings.mlr.press/v201/bhattacharjee23a.html.