Efficient Optimization Algorithms for Linear Adversarial Training

Antonio H. Ribeiro, Thomas B. Schön, Dave Zachariah, Francis Bach
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1207-1215, 2025.

Abstract

Adversarial training can be used to learn models that are robust against perturbations. For linear models, it can be formulated as a convex optimization problem. Compared to methods proposed in the context of deep learning, leveraging the optimization structure allows significantly faster convergence rates. Still, the use of generic convex solvers can be inefficient for large-scale problems. Here, we propose tailored optimization algorithms for the adversarial training of linear models, which render large-scale regression and classification problems more tractable. For regression problems, we propose a family of solvers based on iterative ridge regression and, for classification, a family of solvers based on projected gradient descent. The methods are based on extended variable reformulations of the original problem. We illustrate their efficiency in numerical examples.
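For intuition about the regression case: under an ℓ∞-bounded perturbation, the worst-case squared loss of a linear model has the closed form (|yᵢ − xᵢᵀβ| + ε‖β‖₁)², so adversarial training is a convex problem. The sketch below is a minimal IRLS-style illustration of the "iterative ridge regression" idea, alternating closed-form weighted ridge solves with variational weight updates; it is our own hedged sketch under these assumptions, not the authors' algorithm (the function name and the smoothing constant `delta` are ours).

```python
import numpy as np

def adv_train_linf_regression(X, y, eps, n_iter=200, delta=1e-9):
    """Illustrative sketch (not the paper's exact algorithm):
        min_beta  sum_i (|y_i - x_i' beta| + eps * ||beta||_1)^2,
    i.e. adversarial training of linear regression under l_inf
    perturbations of radius eps, solved by alternating closed-form
    weighted ridge regressions with variational weight updates.
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # warm start: ordinary least squares
    for _ in range(n_iter):
        r = y - X @ beta
        l1 = np.abs(beta).sum()
        # variational weights from (a+b)^2 = min_{eta in (0,1)} a^2/eta + b^2/(1-eta)
        eta = (np.abs(r) + delta) / (np.abs(r) + eps * l1 + 2 * delta)
        # and ||beta||_1^2 = min_{w >= 0, sum w = 1} sum_j beta_j^2 / w_j
        w = (np.abs(beta) + delta) / (l1 + p * delta)
        # for fixed (eta, w) the surrogate is a weighted ridge problem:
        #   min_beta sum_i r_i^2/eta_i + lam * sum_j beta_j^2 / w_j
        A = X.T @ (X / eta[:, None])
        lam = eps**2 * np.sum(1.0 / (1.0 - eta))
        beta = np.linalg.solve(A + lam * np.diag(1.0 / w), X.T @ (y / eta))
    return beta
```

Each iteration only solves a p-by-p ridge system, which is what makes this family of methods cheap at scale; the weight updates come from the exact variational identities in the comments, so the robust objective is non-increasing (up to the `delta` smoothing).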

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-ribeiro25a,
  title     = {Efficient Optimization Algorithms for Linear Adversarial Training},
  author    = {Ribeiro, Antonio H. and Sch{\"o}n, Thomas B. and Zachariah, Dave and Bach, Francis},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1207--1215},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/ribeiro25a/ribeiro25a.pdf},
  url       = {https://proceedings.mlr.press/v258/ribeiro25a.html},
  abstract  = {Adversarial training can be used to learn models that are robust against perturbations. For linear models, it can be formulated as a convex optimization problem. Compared to methods proposed in the context of deep learning, leveraging the optimization structure allows significantly faster convergence rates. Still, the use of generic convex solvers can be inefficient for large-scale problems. Here, we propose tailored optimization algorithms for the adversarial training of linear models, which render large-scale regression and classification problems more tractable. For regression problems, we propose a family of solvers based on iterative ridge regression and, for classification, a family of solvers based on projected gradient descent. The methods are based on extended variable reformulations of the original problem. We illustrate their efficiency in numerical examples.}
}
Endnote
%0 Conference Paper
%T Efficient Optimization Algorithms for Linear Adversarial Training
%A Antonio H. Ribeiro
%A Thomas B. Schön
%A Dave Zachariah
%A Francis Bach
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-ribeiro25a
%I PMLR
%P 1207--1215
%U https://proceedings.mlr.press/v258/ribeiro25a.html
%V 258
%X Adversarial training can be used to learn models that are robust against perturbations. For linear models, it can be formulated as a convex optimization problem. Compared to methods proposed in the context of deep learning, leveraging the optimization structure allows significantly faster convergence rates. Still, the use of generic convex solvers can be inefficient for large-scale problems. Here, we propose tailored optimization algorithms for the adversarial training of linear models, which render large-scale regression and classification problems more tractable. For regression problems, we propose a family of solvers based on iterative ridge regression and, for classification, a family of solvers based on projected gradient descent. The methods are based on extended variable reformulations of the original problem. We illustrate their efficiency in numerical examples.
APA
Ribeiro, A.H., Schön, T.B., Zachariah, D. & Bach, F. (2025). Efficient Optimization Algorithms for Linear Adversarial Training. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1207-1215. Available from https://proceedings.mlr.press/v258/ribeiro25a.html.