Removing Batch Normalization Boosts Adversarial Training

Haotao Wang, Aston Zhang, Shuai Zheng, Xingjian Shi, Mu Li, Zhangyang Wang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:23433-23445, 2022.

Abstract

Adversarial training (AT) defends deep neural networks against adversarial attacks. One challenge that limits its practical application is the performance degradation on clean samples. A major bottleneck identified by previous works is the widely used batch normalization (BN), which struggles to model the different statistics of clean and adversarial training samples in AT. Although the dominant approach is to extend BN to capture this mixture of distributions, we propose to eliminate this bottleneck entirely by removing all BN layers in AT. Our normalizer-free robust training (NoFrost) method extends recent advances in normalizer-free networks to AT, exploiting their previously unexplored advantage in handling the mixture-distribution challenge. We show that NoFrost achieves adversarial robustness with only a minor sacrifice in clean-sample accuracy. On ImageNet with ResNet50, NoFrost achieves 74.06% clean accuracy, a drop of merely 2.00% from standard training. In contrast, BN-based AT obtains 59.28% clean accuracy, suffering a significant 16.78% drop from standard training. In addition, NoFrost achieves 23.56% adversarial robustness against the PGD attack, improving on the 13.57% robustness of BN-based AT. We observe better model smoothness and larger decision margins from NoFrost, which make its models less sensitive to input perturbations and thus more robust. Moreover, when more data augmentations are incorporated into NoFrost, it achieves comprehensive robustness against multiple distribution shifts. Code and pre-trained models are publicly available at https://github.com/amazon-research/normalizer-free-robust-training.
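To make the BN bottleneck concrete, below is a minimal PyTorch-style sketch of one mixed-batch adversarial training step (an illustration, not the authors' implementation; the model, the PGD hyperparameters, and the helper names are assumptions). With BN present, every BN layer computes its batch statistics, and accumulates its test-time running statistics, over the concatenation of clean and adversarial samples, i.e. over a mixture of two distributions:

import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, steps=10):
    """Illustrative L-inf PGD: ascend the loss gradient, project to the eps-ball."""
    model.eval()  # freeze BN running statistics while crafting the attack
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep pixel values valid
    model.train()
    return x_adv.detach()

def at_step(model, optimizer, x, y):
    """One mixed-batch AT step. Clean and adversarial halves share every layer,
    so any BN layer estimates its statistics over the clean/adversarial
    mixture -- the bottleneck NoFrost removes by training a normalizer-free
    network instead."""
    x_adv = pgd_attack(model, x, y)
    x_mix = torch.cat([x, x_adv], dim=0)
    y_mix = torch.cat([y, y], dim=0)
    loss = F.cross_entropy(model(x_mix), y_mix)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

On the architecture side, NoFrost builds on normalizer-free networks. One of their core ingredients is scaled weight standardization, sketched below in simplified form (following the Brock et al. NF-ResNet formulation with the nonlinearity-specific gain omitted; this is not the paper's exact architecture):

class ScaledWSConv2d(nn.Conv2d):
    """Conv2d with scaled weight standardization: per-output-channel
    standardized weights stand in for activation normalization such as BN."""
    def forward(self, x):
        w = self.weight
        fan_in = w[0].numel()  # in_channels/groups * kernel_h * kernel_w
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        w = (w - mean) / torch.sqrt(var * fan_in + 1e-4)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

Because such a network carries no batch statistics at all, clean and adversarial samples never interact through normalization, which is the property NoFrost exploits.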

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-wang22ap,
  title     = {Removing Batch Normalization Boosts Adversarial Training},
  author    = {Wang, Haotao and Zhang, Aston and Zheng, Shuai and Shi, Xingjian and Li, Mu and Wang, Zhangyang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {23433--23445},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/wang22ap/wang22ap.pdf},
  url       = {https://proceedings.mlr.press/v162/wang22ap.html},
  abstract  = {Adversarial training (AT) defends deep neural networks against adversarial attacks. One challenge that limits its practical application is the performance degradation on clean samples. A major bottleneck identified by previous works is the widely used batch normalization (BN), which struggles to model the different statistics of clean and adversarial training samples in AT. Although the dominant approach is to extend BN to capture this mixture of distributions, we propose to eliminate this bottleneck entirely by removing all BN layers in AT. Our normalizer-free robust training (NoFrost) method extends recent advances in normalizer-free networks to AT, exploiting their previously unexplored advantage in handling the mixture-distribution challenge. We show that NoFrost achieves adversarial robustness with only a minor sacrifice in clean-sample accuracy. On ImageNet with ResNet50, NoFrost achieves 74.06\% clean accuracy, a drop of merely 2.00\% from standard training. In contrast, BN-based AT obtains 59.28\% clean accuracy, suffering a significant 16.78\% drop from standard training. In addition, NoFrost achieves 23.56\% adversarial robustness against the PGD attack, improving on the 13.57\% robustness of BN-based AT. We observe better model smoothness and larger decision margins from NoFrost, which make its models less sensitive to input perturbations and thus more robust. Moreover, when more data augmentations are incorporated into NoFrost, it achieves comprehensive robustness against multiple distribution shifts. Code and pre-trained models are publicly available at https://github.com/amazon-research/normalizer-free-robust-training.}
}
Endnote
%0 Conference Paper
%T Removing Batch Normalization Boosts Adversarial Training
%A Haotao Wang
%A Aston Zhang
%A Shuai Zheng
%A Xingjian Shi
%A Mu Li
%A Zhangyang Wang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-wang22ap
%I PMLR
%P 23433--23445
%U https://proceedings.mlr.press/v162/wang22ap.html
%V 162
%X Adversarial training (AT) defends deep neural networks against adversarial attacks. One challenge that limits its practical application is the performance degradation on clean samples. A major bottleneck identified by previous works is the widely used batch normalization (BN), which struggles to model the different statistics of clean and adversarial training samples in AT. Although the dominant approach is to extend BN to capture this mixture of distributions, we propose to eliminate this bottleneck entirely by removing all BN layers in AT. Our normalizer-free robust training (NoFrost) method extends recent advances in normalizer-free networks to AT, exploiting their previously unexplored advantage in handling the mixture-distribution challenge. We show that NoFrost achieves adversarial robustness with only a minor sacrifice in clean-sample accuracy. On ImageNet with ResNet50, NoFrost achieves 74.06% clean accuracy, a drop of merely 2.00% from standard training. In contrast, BN-based AT obtains 59.28% clean accuracy, suffering a significant 16.78% drop from standard training. In addition, NoFrost achieves 23.56% adversarial robustness against the PGD attack, improving on the 13.57% robustness of BN-based AT. We observe better model smoothness and larger decision margins from NoFrost, which make its models less sensitive to input perturbations and thus more robust. Moreover, when more data augmentations are incorporated into NoFrost, it achieves comprehensive robustness against multiple distribution shifts. Code and pre-trained models are publicly available at https://github.com/amazon-research/normalizer-free-robust-training.
APA
Wang, H., Zhang, A., Zheng, S., Shi, X., Li, M. & Wang, Z. (2022). Removing Batch Normalization Boosts Adversarial Training. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:23433-23445. Available from https://proceedings.mlr.press/v162/wang22ap.html.