Phase-aware Adversarial Defense for Improving Adversarial Robustness

Dawei Zhou, Nannan Wang, Heng Yang, Xinbo Gao, Tongliang Liu
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:42724-42741, 2023.

Abstract

Deep neural networks have been found to be vulnerable to adversarial noise. Recent works show that exploring the impact of adversarial noise on the intrinsic components of data can help improve adversarial robustness. However, the component most closely related to human perception has not been deeply studied. In this paper, inspired by cognitive science, we investigate the interference of adversarial noise from the perspective of image phase, and find that ordinarily trained models lack sufficient robustness against phase-level perturbations. Motivated by this, we propose a joint adversarial defense method: a phase-level adversarial training mechanism to enhance adversarial robustness on the phase pattern, and an amplitude-based pre-processing operation to mitigate adversarial perturbations in the amplitude pattern. Experimental results show that the proposed method significantly improves robust accuracy against multiple attacks, including adaptive attacks. In addition, ablation studies demonstrate the effectiveness of our defense strategy.
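
The defense rests on the standard Fourier decomposition of an image into an amplitude spectrum and a phase spectrum. Below is a minimal NumPy sketch of that primitive only, not the paper's training or pre-processing code; the helper names decompose and recompose are illustrative. Swapping the phase of one image into the amplitude of another reproduces the classic observation, which motivates the paper, that phase carries most of the perceptual content.

```python
import numpy as np

def decompose(image):
    """Split an image into its Fourier amplitude and phase spectra."""
    spectrum = np.fft.fft2(image)
    return np.abs(spectrum), np.angle(spectrum)

def recompose(amplitude, phase):
    """Rebuild an image from amplitude and phase spectra.

    Any residual imaginary part is numerical noise, so keep the real part.
    """
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

# Hypothetical demo: combine the amplitude of one image with the phase
# of another. Perceptually, the hybrid resembles the phase donor.
rng = np.random.default_rng(0)
img_a, img_b = rng.random((32, 32)), rng.random((32, 32))
amp_a, _ = decompose(img_a)
_, phase_b = decompose(img_b)
hybrid = recompose(amp_a, phase_b)

# Sanity check: decompose/recompose round-trips the original image.
assert np.allclose(recompose(*decompose(img_a)), img_a)
```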

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-zhou23m,
  title     = {Phase-aware Adversarial Defense for Improving Adversarial Robustness},
  author    = {Zhou, Dawei and Wang, Nannan and Yang, Heng and Gao, Xinbo and Liu, Tongliang},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {42724--42741},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/zhou23m/zhou23m.pdf},
  url       = {https://proceedings.mlr.press/v202/zhou23m.html}
}
Endnote
%0 Conference Paper
%T Phase-aware Adversarial Defense for Improving Adversarial Robustness
%A Dawei Zhou
%A Nannan Wang
%A Heng Yang
%A Xinbo Gao
%A Tongliang Liu
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-zhou23m
%I PMLR
%P 42724--42741
%U https://proceedings.mlr.press/v202/zhou23m.html
%V 202
APA
Zhou, D., Wang, N., Yang, H., Gao, X. & Liu, T. (2023). Phase-aware Adversarial Defense for Improving Adversarial Robustness. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:42724-42741. Available from https://proceedings.mlr.press/v202/zhou23m.html.