Stronger and Faster Wasserstein Adversarial Attacks

Kaiwen Wu, Allen Wang, Yaoliang Yu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10377-10387, 2020.

Abstract

Deep models, while being extremely flexible and accurate, are surprisingly vulnerable to “small, imperceptible” perturbations known as adversarial attacks. While the majority of existing attacks focus on measuring perturbations under the $\ell_p$ metric, Wasserstein distance, which takes geometry in pixel space into account, has long been known to be a suitable metric for measuring image quality and has recently risen as a compelling alternative to the $\ell_p$ metric in adversarial attacks. However, constructing an effective attack under the Wasserstein metric is computationally much more challenging and calls for better optimization algorithms. We address this gap in two ways: (a) we develop an exact yet efficient projection operator to enable a stronger projected gradient attack; (b) we show that the Frank-Wolfe method equipped with a suitable linear minimization oracle works extremely fast under Wasserstein constraints. Our algorithms not only converge faster but also generate much stronger attacks. For instance, we decrease the accuracy of a residual network on CIFAR-10 to $3.4\%$ within a Wasserstein perturbation ball of radius $0.005$, in contrast to $65.6\%$ using the previous Wasserstein attack based on an \emph{approximate} projection operator. Furthermore, employing our stronger attacks in adversarial training significantly improves the robustness of adversarially trained models.
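The Frank-Wolfe method highlighted in the abstract is projection-free: each iteration only needs a linear minimization oracle (LMO) over the constraint set, which is what makes it attractive when exact projection is expensive. As an illustration only (not the paper's Wasserstein LMO), here is a minimal NumPy sketch of the generic Frank-Wolfe iteration using an $\ell_1$ ball, where the LMO has a simple closed form:

```python
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle over an l1 ball:
    argmin_{||s||_1 <= radius} <grad, s> puts all mass on the
    coordinate with the largest-magnitude gradient component."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe(grad_f, x0, radius, steps=100):
    """Generic Frank-Wolfe loop; grad_f returns the gradient of the
    objective. Iterates stay feasible by convex combination, so no
    projection step is ever needed."""
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        s = lmo_l1_ball(g, radius)      # direction from the LMO
        gamma = 2.0 / (t + 2)           # standard step-size schedule
        x = (1 - gamma) * x + gamma * s # convex combination: stays in the ball
    return x

# Example: minimize f(x) = 0.5 * ||x - b||^2 over the l1 ball of radius 1.
b = np.array([3.0, -0.5, 0.2])
x_star = frank_wolfe(lambda x: x - b, np.zeros(3), radius=1.0)
# x_star ≈ [1, 0, 0], the l1 projection of b onto the unit ball
```

The paper's contribution is, in part, designing an efficient LMO for the Wasserstein constraint set; the loop structure above is the standard template into which such an oracle would plug.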

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wu20d,
  title     = {Stronger and Faster {W}asserstein Adversarial Attacks},
  author    = {Wu, Kaiwen and Wang, Allen and Yu, Yaoliang},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10377--10387},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wu20d/wu20d.pdf},
  url       = {http://proceedings.mlr.press/v119/wu20d.html},
  abstract  = {Deep models, while being extremely flexible and accurate, are surprisingly vulnerable to ``small, imperceptible'' perturbations known as adversarial attacks. While the majority of existing attacks focus on measuring perturbations under the $\ell_p$ metric, Wasserstein distance, which takes geometry in pixel space into account, has long been known to be a suitable metric for measuring image quality and has recently risen as a compelling alternative to the $\ell_p$ metric in adversarial attacks. However, constructing an effective attack under the Wasserstein metric is computationally much more challenging and calls for better optimization algorithms. We address this gap in two ways: (a) we develop an exact yet efficient projection operator to enable a stronger projected gradient attack; (b) we show that the Frank-Wolfe method equipped with a suitable linear minimization oracle works extremely fast under Wasserstein constraints. Our algorithms not only converge faster but also generate much stronger attacks. For instance, we decrease the accuracy of a residual network on CIFAR-10 to $3.4\%$ within a Wasserstein perturbation ball of radius $0.005$, in contrast to $65.6\%$ using the previous Wasserstein attack based on an \emph{approximate} projection operator. Furthermore, employing our stronger attacks in adversarial training significantly improves the robustness of adversarially trained models.}
}
Endnote
%0 Conference Paper
%T Stronger and Faster Wasserstein Adversarial Attacks
%A Kaiwen Wu
%A Allen Wang
%A Yaoliang Yu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wu20d
%I PMLR
%P 10377--10387
%U http://proceedings.mlr.press/v119/wu20d.html
%V 119
%X Deep models, while being extremely flexible and accurate, are surprisingly vulnerable to “small, imperceptible” perturbations known as adversarial attacks. While the majority of existing attacks focus on measuring perturbations under the $\ell_p$ metric, Wasserstein distance, which takes geometry in pixel space into account, has long been known to be a suitable metric for measuring image quality and has recently risen as a compelling alternative to the $\ell_p$ metric in adversarial attacks. However, constructing an effective attack under the Wasserstein metric is computationally much more challenging and calls for better optimization algorithms. We address this gap in two ways: (a) we develop an exact yet efficient projection operator to enable a stronger projected gradient attack; (b) we show that the Frank-Wolfe method equipped with a suitable linear minimization oracle works extremely fast under Wasserstein constraints. Our algorithms not only converge faster but also generate much stronger attacks. For instance, we decrease the accuracy of a residual network on CIFAR-10 to $3.4\%$ within a Wasserstein perturbation ball of radius $0.005$, in contrast to $65.6\%$ using the previous Wasserstein attack based on an \emph{approximate} projection operator. Furthermore, employing our stronger attacks in adversarial training significantly improves the robustness of adversarially trained models.
APA
Wu, K., Wang, A. & Yu, Y. (2020). Stronger and Faster Wasserstein Adversarial Attacks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10377-10387. Available from http://proceedings.mlr.press/v119/wu20d.html.