Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness

Beomsu Kim, Junghoon Seo
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:4426-4445, 2022.

Abstract

Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful methods for training adversarially robust DNNs is to solve a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) are guaranteed to converge to a stationary point at rate O(1/K^{1/2}). In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs achieve a convergence rate of O(1/K), which improves upon the O(1/K^{1/2}) rate of DAT and YOPO. We devise a practical variant of SI-HGs and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
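
For context, AT is commonly formulated as the following minimax problem (a standard formulation shown here for illustration; the paper's exact objective and notation may differ):

\min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \Big[ \max_{\|\delta\| \le \epsilon} \ell\big(f_\theta(x + \delta), y\big) \Big]

where f_\theta denotes the DNN with parameters \theta, \ell is a loss function, and \delta is a perturbation constrained to a norm ball of radius \epsilon. The inner maximization is nonconcave in \delta and the outer minimization is nonconvex in \theta, which is what makes convergence guarantees such as the O(1/K) rate above nontrivial.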

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-kim22c,
  title     = {Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness},
  author    = {Kim, Beomsu and Seo, Junghoon},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {4426--4445},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/kim22c/kim22c.pdf},
  url       = {https://proceedings.mlr.press/v151/kim22c.html}
}
Endnote
%0 Conference Paper
%T Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness
%A Beomsu Kim
%A Junghoon Seo
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-kim22c
%I PMLR
%P 4426--4445
%U https://proceedings.mlr.press/v151/kim22c.html
%V 151
APA
Kim, B. & Seo, J. (2022). Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:4426-4445. Available from https://proceedings.mlr.press/v151/kim22c.html.
