Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations

Xuyang Zhong, Yixiao Huang, Chen Liu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:61708-61726, 2024.

Abstract

This work studies sparse adversarial perturbations bounded by the $l_0$ norm. We propose a white-box PGD-like attack method named sparse-PGD to effectively and efficiently generate such perturbations. Furthermore, we combine sparse-PGD with a black-box attack to comprehensively and more reliably evaluate the models’ robustness against $l_0$ bounded adversarial perturbations. Moreover, the efficiency of sparse-PGD enables us to conduct adversarial training to build robust models against sparse perturbations. Extensive experiments demonstrate that our proposed attack algorithm exhibits strong performance in different scenarios. More importantly, compared with other robust models, our adversarially trained model demonstrates state-of-the-art robustness against various sparse attacks. Code is available at https://github.com/CityU-MLO/sPGD.
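To make the setting concrete, the sketch below shows a minimal PyTorch baseline for an $l_0$-bounded PGD-style attack: ordinary signed-gradient ascent followed by a projection that keeps only the k largest-magnitude coordinates of the perturbation. This is an illustrative reconstruction under assumptions, not the authors' sparse-PGD; the function name l0_pgd and the hyperparameters k, steps, and alpha are placeholders, and the actual implementation lives in the linked repository.

import torch
import torch.nn.functional as F

def l0_pgd(model, x, y, k=20, steps=50, alpha=0.25):
    # Minimal l0-bounded PGD sketch (illustrative baseline, NOT the paper's
    # sparse-PGD). x: batch of images in [0, 1]; k: maximum number of nonzero
    # perturbation coordinates allowed per sample.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()                  # gradient ascent step
            # Project onto the l0 ball: per sample, zero all but the k
            # largest-magnitude entries (ties may keep slightly more than k).
            mag = delta.abs().flatten(1)
            kth = mag.topk(k, dim=1).values[:, -1:]       # k-th largest magnitude
            mask = (mag >= kth).view_as(delta)
            delta *= mask.float()
            delta.copy_((x + delta).clamp(0.0, 1.0) - x)  # keep pixels in [0, 1]
    return (x + delta).detach()

One caveat on this sketch: it counts nonzero coordinates, whereas sparse attacks on RGB images often measure $l_0$ at the pixel level (all three channels of a modified pixel count once), which would change the projection step accordingly.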

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhong24c,
  title     = {Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations},
  author    = {Zhong, Xuyang and Huang, Yixiao and Liu, Chen},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {61708--61726},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhong24c/zhong24c.pdf},
  url       = {https://proceedings.mlr.press/v235/zhong24c.html},
  abstract  = {This work studies sparse adversarial perturbations bounded by the $l_0$ norm. We propose a white-box PGD-like attack method named sparse-PGD to effectively and efficiently generate such perturbations. Furthermore, we combine sparse-PGD with a black-box attack to comprehensively and more reliably evaluate the models’ robustness against $l_0$ bounded adversarial perturbations. Moreover, the efficiency of sparse-PGD enables us to conduct adversarial training to build robust models against sparse perturbations. Extensive experiments demonstrate that our proposed attack algorithm exhibits strong performance in different scenarios. More importantly, compared with other robust models, our adversarially trained model demonstrates state-of-the-art robustness against various sparse attacks. Code is available at https://github.com/CityU-MLO/sPGD.}
}
EndNote
%0 Conference Paper
%T Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations
%A Xuyang Zhong
%A Yixiao Huang
%A Chen Liu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhong24c
%I PMLR
%P 61708--61726
%U https://proceedings.mlr.press/v235/zhong24c.html
%V 235
%X This work studies sparse adversarial perturbations bounded by the $l_0$ norm. We propose a white-box PGD-like attack method named sparse-PGD to effectively and efficiently generate such perturbations. Furthermore, we combine sparse-PGD with a black-box attack to comprehensively and more reliably evaluate the models’ robustness against $l_0$ bounded adversarial perturbations. Moreover, the efficiency of sparse-PGD enables us to conduct adversarial training to build robust models against sparse perturbations. Extensive experiments demonstrate that our proposed attack algorithm exhibits strong performance in different scenarios. More importantly, compared with other robust models, our adversarially trained model demonstrates state-of-the-art robustness against various sparse attacks. Code is available at https://github.com/CityU-MLO/sPGD.
APA
Zhong, X., Huang, Y. & Liu, C. (2024). Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:61708-61726. Available from https://proceedings.mlr.press/v235/zhong24c.html.
