Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11278-11287, 2020.

Abstract

Adversarial training based on the minimax formulation is necessary for obtaining the adversarial robustness of trained models. However, this formulation is conservative or even pessimistic, so that it sometimes hurts natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training should employ confident adversarial data for updating the current model. We propose a novel formulation of friendly adversarial training (FAT): rather than employing the most adversarial data that maximize the loss, we search for the least adversarial data (i.e., friendly adversarial data) that minimize the loss, among the adversarial data that are confidently misclassified. Our novel formulation is easy to implement by stopping search algorithms for the most adversarial data, such as PGD (projected gradient descent), early; we call this early-stopped PGD. Theoretically, FAT is justified by an upper bound of the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question in the negative: adversarial robustness can indeed be achieved without compromising natural generalization.
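The key implementation idea, early-stopped PGD, can be sketched as follows. This is a minimal illustrative sketch, not the authors' released code: it is written in PyTorch for a single example, assumes pixel values in [0, 1] and an L-infinity perturbation budget, and the function name, hyperparameter defaults, and the extra-step parameter tau are placeholders chosen for illustration.

```python
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, eps=8/255, step_size=2/255, max_steps=10, tau=0):
    """Search for friendly adversarial data: stop the PGD search `tau` steps
    after the perturbed input is first misclassified (tau=0 stops immediately).
    Assumes x has shape (1, C, H, W) and y has shape (1,)."""
    x_adv = x.clone().detach()
    budget = tau
    for _ in range(max_steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Early stop: the example is already adversarial (confidently misclassified),
        # so do not push it further toward maximizing the loss.
        if logits.argmax(dim=1).item() != y.item():
            if budget == 0:
                break
            budget -= 1
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                 # gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep valid pixel range
    return x_adv.detach()
```

In friendly adversarial training, the model would then be updated on the returned x_adv rather than on the most adversarial example that full-step PGD would produce.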

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20z,
  title     = {Attacks Which Do Not Kill Training Make Adversarial Learning Stronger},
  author    = {Zhang, Jingfeng and Xu, Xilie and Han, Bo and Niu, Gang and Cui, Lizhen and Sugiyama, Masashi and Kankanhalli, Mohan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11278--11287},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20z/zhang20z.pdf},
  url       = {https://proceedings.mlr.press/v119/zhang20z.html},
  abstract  = {Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, it is conservative or even pessimistic so that it sometimes hurts the natural generalization. In this paper, we raise a fundamental question{—}do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training is to employ confident adversarial data for updating the current model. We propose a novel formulation of friendly adversarial training (FAT): rather than employing most adversarial data maximizing the loss, we search for least adversarial data (i.e., friendly adversarial data) minimizing the loss, among the adversarial data that are confidently misclassified. Our novel formulation is easy to implement by just stopping the most adversarial data searching algorithms such as PGD (projected gradient descent) early, which we call early-stopped PGD. Theoretically, FAT is justified by an upper bound of the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question negatively{—}adversarial robustness can indeed be achieved without compromising the natural generalization.}
}
Endnote
%0 Conference Paper
%T Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
%A Jingfeng Zhang
%A Xilie Xu
%A Bo Han
%A Gang Niu
%A Lizhen Cui
%A Masashi Sugiyama
%A Mohan Kankanhalli
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20z
%I PMLR
%P 11278--11287
%U https://proceedings.mlr.press/v119/zhang20z.html
%V 119
%X Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, it is conservative or even pessimistic so that it sometimes hurts the natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training is to employ confident adversarial data for updating the current model. We propose a novel formulation of friendly adversarial training (FAT): rather than employing most adversarial data maximizing the loss, we search for least adversarial data (i.e., friendly adversarial data) minimizing the loss, among the adversarial data that are confidently misclassified. Our novel formulation is easy to implement by just stopping the most adversarial data searching algorithms such as PGD (projected gradient descent) early, which we call early-stopped PGD. Theoretically, FAT is justified by an upper bound of the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question negatively: adversarial robustness can indeed be achieved without compromising the natural generalization.
APA
Zhang, J., Xu, X., Han, B., Niu, G., Cui, L., Sugiyama, M. & Kankanhalli, M. (2020). Attacks Which Do Not Kill Training Make Adversarial Learning Stronger. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11278-11287. Available from https://proceedings.mlr.press/v119/zhang20z.html.
