Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:4426-4445, 2022.
Abstract
Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful methods for training adversarially robust DNNs is to solve a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) are guaranteed to converge to a stationary point, with rate O(1/K^{1/2}). In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs achieve the convergence rate O(1/K), which improves upon the O(1/K^{1/2}) rate of DAT and YOPO. We also devise a practical variant of SI-HGs and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
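To make the minimax formulation of adversarial training concrete, the sketch below shows a generic PGD-based AT step: an inner ascent over a bounded perturbation followed by an outer descent on the model parameters. This is only an illustration of the standard AT objective the abstract refers to, not the paper's SI-HG algorithm; the function names, epsilon, step sizes, and iteration counts are assumed for the example.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Approximate the inner maximization max_{||delta||_inf <= eps} loss with PGD.

    All hyperparameters here are illustrative defaults, not values from the paper.
    """
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascent step on the perturbation, then project back onto the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization step of the AT minimax problem on a perturbed batch."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this generic scheme the inner and outer problems are solved by alternating explicit gradient steps; SI-HGs, as described in the abstract, instead generalize the stochastic primal-dual hybrid gradient method with a semi-implicit update to obtain the faster O(1/K) rate.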