Towards Stable and Efficient Adversarial Training against $l_1$ Bounded Adversarial Attacks

Yulun Jiang, Chen Liu, Zhichao Huang, Mathieu Salzmann, Sabine Susstrunk
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:15089-15104, 2023.

Abstract

We address the problem of stably and efficiently training a deep neural network robust to adversarial perturbations bounded by an $l_1$ norm. We demonstrate that achieving robustness against $l_1$-bounded perturbations is more challenging than in the $l_2$ or $l_\infty$ cases, because adversarial training against $l_1$-bounded perturbations is more likely to suffer from catastrophic overfitting and yield training instabilities. Our analysis links these issues to the coordinate descent strategy used in existing methods. We address this by introducing Fast-EG-$l_1$, an efficient adversarial training algorithm based on Euclidean geometry and free of coordinate descent. Fast-EG-$l_1$ comes with no additional memory costs and no extra hyper-parameters to tune. Our experimental results on various datasets demonstrate that Fast-EG-$l_1$ yields the best and most stable robustness against $l_1$-bounded adversarial attacks among the methods of comparable computational complexity. Code and the checkpoints are available at https://github.com/IVRL/FastAdvL.
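The abstract contrasts coordinate-descent attack strategies with gradient steps taken in Euclidean geometry followed by projection back onto the $l_1$ ball. As a rough, hypothetical illustration only (this is not the paper's Fast-EG-$l_1$ algorithm), the sketch below takes one plain gradient step on a toy linear loss and projects the perturbation onto the $l_1$ ball using the standard sort-based projection; the loss, the weights `w`, and the step size `alpha` are assumptions made for demonstration.

```python
import numpy as np

def project_l1_ball(v, eps):
    """Euclidean projection of v onto the l1 ball of radius eps
    (standard sort-based projection)."""
    u = np.abs(v)
    if u.sum() <= eps:
        return v.copy()
    s = np.sort(u)[::-1]                      # sorted magnitudes, descending
    cumsum = np.cumsum(s)
    k = np.arange(1, len(s) + 1)
    rho = np.nonzero(s * k > (cumsum - eps))[0][-1]
    theta = (cumsum[rho] - eps) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(u - theta, 0.0)

# One Euclidean-geometry step on a toy linear loss L(x + delta) = w . (x + delta),
# whose input gradient is simply w. All values below are illustrative.
x = np.array([0.5, -0.2, 0.8])
w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
eps, alpha = 0.3, 0.5            # l1 budget and step size (assumed)
delta = np.zeros_like(x)
grad = w                         # gradient of the toy loss w.r.t. the input
delta = project_l1_ball(delta + alpha * grad, eps)  # step, then project
```

After the projection, the perturbation satisfies the budget exactly: its $l_1$ norm is at most `eps`, and the mass concentrates on the coordinates with the largest gradient magnitudes.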

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-jiang23f,
  title     = {Towards Stable and Efficient Adversarial Training against $l_1$ Bounded Adversarial Attacks},
  author    = {Jiang, Yulun and Liu, Chen and Huang, Zhichao and Salzmann, Mathieu and Susstrunk, Sabine},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {15089--15104},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/jiang23f/jiang23f.pdf},
  url       = {https://proceedings.mlr.press/v202/jiang23f.html},
  abstract  = {We address the problem of stably and efficiently training a deep neural network robust to adversarial perturbations bounded by an $l_1$ norm. We demonstrate that achieving robustness against $l_1$-bounded perturbations is more challenging than in the $l_2$ or $l_\infty$ cases, because adversarial training against $l_1$-bounded perturbations is more likely to suffer from catastrophic overfitting and yield training instabilities. Our analysis links these issues to the coordinate descent strategy used in existing methods. We address this by introducing Fast-EG-$l_1$, an efficient adversarial training algorithm based on Euclidean geometry and free of coordinate descent. Fast-EG-$l_1$ comes with no additional memory costs and no extra hyper-parameters to tune. Our experimental results on various datasets demonstrate that Fast-EG-$l_1$ yields the best and most stable robustness against $l_1$-bounded adversarial attacks among the methods of comparable computational complexity. Code and the checkpoints are available at https://github.com/IVRL/FastAdvL.}
}
Endnote
%0 Conference Paper
%T Towards Stable and Efficient Adversarial Training against $l_1$ Bounded Adversarial Attacks
%A Yulun Jiang
%A Chen Liu
%A Zhichao Huang
%A Mathieu Salzmann
%A Sabine Susstrunk
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-jiang23f
%I PMLR
%P 15089--15104
%U https://proceedings.mlr.press/v202/jiang23f.html
%V 202
%X We address the problem of stably and efficiently training a deep neural network robust to adversarial perturbations bounded by an $l_1$ norm. We demonstrate that achieving robustness against $l_1$-bounded perturbations is more challenging than in the $l_2$ or $l_\infty$ cases, because adversarial training against $l_1$-bounded perturbations is more likely to suffer from catastrophic overfitting and yield training instabilities. Our analysis links these issues to the coordinate descent strategy used in existing methods. We address this by introducing Fast-EG-$l_1$, an efficient adversarial training algorithm based on Euclidean geometry and free of coordinate descent. Fast-EG-$l_1$ comes with no additional memory costs and no extra hyper-parameters to tune. Our experimental results on various datasets demonstrate that Fast-EG-$l_1$ yields the best and most stable robustness against $l_1$-bounded adversarial attacks among the methods of comparable computational complexity. Code and the checkpoints are available at https://github.com/IVRL/FastAdvL.
APA
Jiang, Y., Liu, C., Huang, Z., Salzmann, M. &amp; Susstrunk, S. (2023). Towards Stable and Efficient Adversarial Training against $l_1$ Bounded Adversarial Attacks. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:15089-15104. Available from https://proceedings.mlr.press/v202/jiang23f.html.