A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs

Kasimir Tanner, Matteo Vilucchio, Bruno Loureiro, Florent Krzakala
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2530-2538, 2025.

Abstract

This work investigates adversarial training in the context of margin-based linear classifiers in the high-dimensional regime where the dimension $d$ and the number of data points $n$ diverge with a fixed ratio $\alpha = n / d$. We introduce a tractable mathematical model where the interplay between the data and adversarial attacker geometries can be studied, while capturing the core phenomenology observed in the adversarial robustness literature. Our main theoretical contribution is an exact asymptotic description of the sufficient statistics for the adversarial empirical risk minimiser, under generic convex and non-increasing losses for a Block Feature Model. Our results allow us to precisely characterise which directions in the data are associated with a higher generalisation/robustness trade-off, as defined by a robustness and a usefulness metric. This goes beyond previous models in the literature, which fail to capture a difference in performance between adversarially trained models in the high sample complexity regime. In particular, we unveil the existence of directions which can be defended without penalising accuracy. Finally, we show the advantage of defending non-robust features during training, identifying uniform protection as an inherently effective defence mechanism.
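As an illustrative sketch of the setting (not the paper's Block Feature Model or its asymptotic analysis), adversarial empirical risk minimisation for a linear classifier admits a closed form: the worst-case $\ell_2$ perturbation of radius $\varepsilon$ reduces each margin $y\, w^\top x$ by exactly $\varepsilon \|w\|_2$, so no inner maximisation loop is needed. All names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples in d dimensions, labels from a random teacher vector.
n, d = 400, 100                      # sample ratio alpha = n / d = 4 (illustrative)
teacher = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ teacher)

eps = 0.3                            # adversary's l2 perturbation budget

def adv_logistic_grad(w, X, y, eps):
    """Gradient of the adversarial logistic risk for a linear classifier.

    For f(x) = w @ x, min_{||delta|| <= eps} y w @ (x + delta)
    equals y w @ x - eps * ||w||, giving a closed-form objective."""
    margins = y * (X @ w) - eps * np.linalg.norm(w)
    # dL/dmargin for logistic loss, computed stably: -sigmoid(-margin)
    s = -np.exp(-np.logaddexp(0.0, margins))
    grad_margin = y[:, None] * X - eps * w / (np.linalg.norm(w) + 1e-12)
    return np.mean(s[:, None] * grad_margin, axis=0)

# Plain gradient descent on the adversarial empirical risk.
w = rng.standard_normal(d) * 0.01
for _ in range(500):
    w -= 0.5 * adv_logistic_grad(w, X, y, eps)

clean_acc = np.mean(np.sign(X @ w) == y)                       # standard accuracy
robust_acc = np.mean(y * (X @ w) - eps * np.linalg.norm(w) > 0)  # accuracy under attack
```

The gap between `clean_acc` and `robust_acc` is a simple (training-set) instance of the generalisation/robustness trade-off the paper characterises exactly in the high-dimensional limit.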

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-tanner25a,
  title     = {A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs},
  author    = {Tanner, Kasimir and Vilucchio, Matteo and Loureiro, Bruno and Krzakala, Florent},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {2530--2538},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/tanner25a/tanner25a.pdf},
  url       = {https://proceedings.mlr.press/v258/tanner25a.html},
  abstract  = {This work investigates adversarial training in the context of margin-based linear classifiers in the high-dimensional regime where the dimension $d$ and the number of data points $n$ diverge with a fixed ratio $\alpha = n / d$. We introduce a tractable mathematical model where the interplay between the data and adversarial attacker geometries can be studied, while capturing the core phenomenology observed in the adversarial robustness literature. Our main theoretical contribution is an exact asymptotic description of the sufficient statistics for the adversarial empirical risk minimiser, under generic convex and non-increasing losses for a Block Feature Model. Our results allow us to precisely characterise which directions in the data are associated with a higher generalisation/robustness trade-off, as defined by a robustness and a usefulness metric. This goes beyond previous models in the literature, which fail to capture a difference in performance between adversarially trained models in the high sample complexity regime. In particular, we unveil the existence of directions which can be defended without penalising accuracy. Finally, we show the advantage of defending non-robust features during training, identifying uniform protection as an inherently effective defence mechanism.}
}
Endnote
%0 Conference Paper
%T A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs
%A Kasimir Tanner
%A Matteo Vilucchio
%A Bruno Loureiro
%A Florent Krzakala
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-tanner25a
%I PMLR
%P 2530--2538
%U https://proceedings.mlr.press/v258/tanner25a.html
%V 258
%X This work investigates adversarial training in the context of margin-based linear classifiers in the high-dimensional regime where the dimension $d$ and the number of data points $n$ diverge with a fixed ratio $\alpha = n / d$. We introduce a tractable mathematical model where the interplay between the data and adversarial attacker geometries can be studied, while capturing the core phenomenology observed in the adversarial robustness literature. Our main theoretical contribution is an exact asymptotic description of the sufficient statistics for the adversarial empirical risk minimiser, under generic convex and non-increasing losses for a Block Feature Model. Our results allow us to precisely characterise which directions in the data are associated with a higher generalisation/robustness trade-off, as defined by a robustness and a usefulness metric. This goes beyond previous models in the literature, which fail to capture a difference in performance between adversarially trained models in the high sample complexity regime. In particular, we unveil the existence of directions which can be defended without penalising accuracy. Finally, we show the advantage of defending non-robust features during training, identifying uniform protection as an inherently effective defence mechanism.
APA
Tanner, K., Vilucchio, M., Loureiro, B. &amp; Krzakala, F. (2025). A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2530-2538. Available from https://proceedings.mlr.press/v258/tanner25a.html.
