Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization

Jelena Diakonikolas, Constantinos Daskalakis, Michael I. Jordan
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2746-2754, 2021.

Abstract

The use of min-max optimization in the adversarial training of deep neural network classifiers and in the training of generative adversarial networks has motivated the study of the nonconvex-nonconcave optimization objectives that frequently arise in these applications. Unfortunately, recent results have established that computing even approximate first-order stationary points of such objectives is intractable, even under smoothness conditions, motivating the study of min-max objectives with additional structure. We introduce a new class of structured nonconvex-nonconcave min-max optimization problems and propose a generalization of the extragradient algorithm that provably converges to a stationary point. The algorithm applies not only to Euclidean spaces but also to general $\ell_p$-normed finite-dimensional real vector spaces. We also discuss its stability under stochastic oracles and provide bounds on its sample complexity. Our iteration-complexity and sample-complexity bounds either match or improve on the best known bounds for the same or less general nonconvex-nonconcave settings, such as those satisfying variational coherence or those in which a weak solution to the associated variational inequality problem is assumed to exist.
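For orientation, the sketch below implements the classic extragradient (EG) method that this paper generalizes, run on the toy saddle-point problem min_x max_y f(x, y) = xy via the operator F(x, y) = (df/dx, -df/dy). This is a minimal illustration only: the bilinear objective, the step size eta, and the iteration budget are illustrative assumptions, not choices made in the paper.

import numpy as np

def F(z):
    # Gradient operator of f(x, y) = x * y:
    # F(x, y) = (df/dx, -df/dy) = (y, -x).
    x, y = z
    return np.array([y, -x])

def extragradient(z0, eta=0.3, iters=300):
    # Classic EG: take a look-ahead (extrapolation) step with F(z),
    # then update from z using the gradient evaluated at the midpoint.
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_mid = z - eta * F(z)
        z = z - eta * F(z_mid)
    return z

print(extragradient([1.0, 1.0]))  # converges toward the stationary point (0, 0)

On this bilinear example, plain gradient descent-ascent spirals away from the origin for any constant step size, whereas evaluating the update gradient at the extrapolated midpoint pulls the EG iterates inward; the paper's algorithm retains this two-step extragradient template while extending its convergence guarantees to a class of structured nonconvex-nonconcave problems.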

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-diakonikolas21a,
  title = {Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization},
  author = {Diakonikolas, Jelena and Daskalakis, Constantinos and Jordan, Michael},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages = {2746--2754},
  year = {2021},
  editor = {Banerjee, Arindam and Fukumizu, Kenji},
  volume = {130},
  series = {Proceedings of Machine Learning Research},
  month = {13--15 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v130/diakonikolas21a/diakonikolas21a.pdf},
  url = {https://proceedings.mlr.press/v130/diakonikolas21a.html},
  abstract = {The use of min-max optimization in the adversarial training of deep neural network classifiers and in the training of generative adversarial networks has motivated the study of the nonconvex-nonconcave optimization objectives that frequently arise in these applications. Unfortunately, recent results have established that computing even approximate first-order stationary points of such objectives is intractable, even under smoothness conditions, motivating the study of min-max objectives with additional structure. We introduce a new class of structured nonconvex-nonconcave min-max optimization problems and propose a generalization of the extragradient algorithm that provably converges to a stationary point. The algorithm applies not only to Euclidean spaces but also to general $\ell_p$-normed finite-dimensional real vector spaces. We also discuss its stability under stochastic oracles and provide bounds on its sample complexity. Our iteration-complexity and sample-complexity bounds either match or improve on the best known bounds for the same or less general nonconvex-nonconcave settings, such as those satisfying variational coherence or those in which a weak solution to the associated variational inequality problem is assumed to exist.}
}
Endnote
%0 Conference Paper
%T Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization
%A Jelena Diakonikolas
%A Constantinos Daskalakis
%A Michael I. Jordan
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-diakonikolas21a
%I PMLR
%P 2746--2754
%U https://proceedings.mlr.press/v130/diakonikolas21a.html
%V 130
%X The use of min-max optimization in the adversarial training of deep neural network classifiers and in the training of generative adversarial networks has motivated the study of the nonconvex-nonconcave optimization objectives that frequently arise in these applications. Unfortunately, recent results have established that computing even approximate first-order stationary points of such objectives is intractable, even under smoothness conditions, motivating the study of min-max objectives with additional structure. We introduce a new class of structured nonconvex-nonconcave min-max optimization problems and propose a generalization of the extragradient algorithm that provably converges to a stationary point. The algorithm applies not only to Euclidean spaces but also to general $\ell_p$-normed finite-dimensional real vector spaces. We also discuss its stability under stochastic oracles and provide bounds on its sample complexity. Our iteration-complexity and sample-complexity bounds either match or improve on the best known bounds for the same or less general nonconvex-nonconcave settings, such as those satisfying variational coherence or those in which a weak solution to the associated variational inequality problem is assumed to exist.
APA
Diakonikolas, J., Daskalakis, C. & Jordan, M.I. (2021). Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2746-2754. Available from https://proceedings.mlr.press/v130/diakonikolas21a.html.
