Diversified Adversarial Attacks based on Conjugate Gradient Method

Keiichiro Yamamura, Haruki Sato, Nariaki Tateiwa, Nozomi Hata, Toru Mitsutake, Issa Oe, Hiroki Ishikura, Katsuki Fujisawa
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:24872-24894, 2022.

Abstract

Deep learning models are vulnerable to adversarial examples, and adversarial attacks used to generate such examples have attracted considerable research interest. Although existing methods based on the steepest descent have achieved high attack success rates, ill-conditioned problems occasionally reduce their performance. To address this limitation, we utilize the conjugate gradient (CG) method, which is effective for this type of problem, and propose a novel attack algorithm inspired by the CG method, named the Auto Conjugate Gradient (ACG) attack. The results of large-scale evaluation experiments conducted on the latest robust models show that, for most models, ACG was able to find more adversarial examples with fewer iterations than the existing SOTA algorithm Auto-PGD (APGD). We investigated the difference in search performance between ACG and APGD in terms of diversification and intensification, and define a measure called Diversity Index (DI) to quantify the degree of diversity. From the analysis of the diversity using this index, we show that the more diverse search of the proposed method remarkably improves its attack success rate.
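The CG-inspired search the abstract describes can be illustrated with a generic nonlinear conjugate-gradient ascent step under an L-infinity constraint. The sketch below is a minimal illustration, not the paper's ACG algorithm: the Polak-Ribiere+ coefficient, the fixed step size, and the function names are all assumptions made for the example.

```python
import numpy as np

def cg_direction(grad, prev_grad, prev_dir):
    """Nonlinear conjugate-gradient ascent direction.

    Uses the Polak-Ribiere+ coefficient as an illustrative choice;
    the paper's ACG attack may compute its coefficient differently.
    """
    y = grad - prev_grad
    denom = max(float(prev_grad.ravel() @ prev_grad.ravel()), 1e-12)
    # Restart (beta = 0 falls back to steepest ascent) when beta would be negative.
    beta = max(float(grad.ravel() @ y.ravel()) / denom, 0.0)
    return grad + beta * prev_dir

def project_linf(x, x_orig, eps):
    """Clip the iterate into the L-infinity ball around x_orig and into [0, 1]."""
    return np.clip(np.clip(x, x_orig - eps, x_orig + eps), 0.0, 1.0)

def cg_attack_step(x, x_orig, grad, prev_grad, prev_dir, eps, step):
    """One projected ascent step along the conjugate direction."""
    d = cg_direction(grad, prev_grad, prev_dir)
    return project_linf(x + step * d, x_orig, eps), d
```

Compared with a plain PGD step, the only change is that the update direction mixes in the previous direction via the coefficient `beta`, which is what lets CG-style methods make progress on ill-conditioned objectives where steepest descent zigzags.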

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-yamamura22a,
  title     = {Diversified Adversarial Attacks based on Conjugate Gradient Method},
  author    = {Yamamura, Keiichiro and Sato, Haruki and Tateiwa, Nariaki and Hata, Nozomi and Mitsutake, Toru and Oe, Issa and Ishikura, Hiroki and Fujisawa, Katsuki},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {24872--24894},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/yamamura22a/yamamura22a.pdf},
  url       = {https://proceedings.mlr.press/v162/yamamura22a.html},
  abstract  = {Deep learning models are vulnerable to adversarial examples, and adversarial attacks used to generate such examples have attracted considerable research interest. Although existing methods based on the steepest descent have achieved high attack success rates, ill-conditioned problems occasionally reduce their performance. To address this limitation, we utilize the conjugate gradient (CG) method, which is effective for this type of problem, and propose a novel attack algorithm inspired by the CG method, named the Auto Conjugate Gradient (ACG) attack. The results of large-scale evaluation experiments conducted on the latest robust models show that, for most models, ACG was able to find more adversarial examples with fewer iterations than the existing SOTA algorithm Auto-PGD (APGD). We investigated the difference in search performance between ACG and APGD in terms of diversification and intensification, and define a measure called Diversity Index (DI) to quantify the degree of diversity. From the analysis of the diversity using this index, we show that the more diverse search of the proposed method remarkably improves its attack success rate.}
}
Endnote
%0 Conference Paper
%T Diversified Adversarial Attacks based on Conjugate Gradient Method
%A Keiichiro Yamamura
%A Haruki Sato
%A Nariaki Tateiwa
%A Nozomi Hata
%A Toru Mitsutake
%A Issa Oe
%A Hiroki Ishikura
%A Katsuki Fujisawa
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-yamamura22a
%I PMLR
%P 24872--24894
%U https://proceedings.mlr.press/v162/yamamura22a.html
%V 162
%X Deep learning models are vulnerable to adversarial examples, and adversarial attacks used to generate such examples have attracted considerable research interest. Although existing methods based on the steepest descent have achieved high attack success rates, ill-conditioned problems occasionally reduce their performance. To address this limitation, we utilize the conjugate gradient (CG) method, which is effective for this type of problem, and propose a novel attack algorithm inspired by the CG method, named the Auto Conjugate Gradient (ACG) attack. The results of large-scale evaluation experiments conducted on the latest robust models show that, for most models, ACG was able to find more adversarial examples with fewer iterations than the existing SOTA algorithm Auto-PGD (APGD). We investigated the difference in search performance between ACG and APGD in terms of diversification and intensification, and define a measure called Diversity Index (DI) to quantify the degree of diversity. From the analysis of the diversity using this index, we show that the more diverse search of the proposed method remarkably improves its attack success rate.
APA
Yamamura, K., Sato, H., Tateiwa, N., Hata, N., Mitsutake, T., Oe, I., Ishikura, H. & Fujisawa, K. (2022). Diversified Adversarial Attacks based on Conjugate Gradient Method. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:24872-24894. Available from https://proceedings.mlr.press/v162/yamamura22a.html.
