What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?

Chi Jin, Praneeth Netrapalli, Michael Jordan
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4880-4889, 2020.

Abstract

Minimax optimization has found extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs), adversarial training, and multi-agent reinforcement learning. As most of these applications involve continuous nonconvex-nonconcave formulations, a very basic question arises: what is a proper definition of local optima? Most previous work answers this question using classical notions of equilibria from simultaneous games, where the min-player and the max-player act simultaneously. In contrast, most applications in machine learning, including GANs and adversarial training, correspond to sequential games, where the order in which the players act is crucial (since minimax is in general not equal to maximin due to the nonconvex-nonconcave nature of the problems). The main contribution of this paper is to propose a proper mathematical definition of local optimality for this sequential setting, termed local minimax, and to present its properties and existence results. Finally, we establish a strong connection to a basic local search algorithm, gradient descent ascent (GDA): under mild conditions, all stable limit points of GDA are exactly local minimax points, up to some degenerate points.
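
As a concrete illustration of the GDA dynamics referenced in the abstract, the sketch below runs simultaneous gradient descent ascent on a toy two-player objective. The objective f, the step sizes, and the step-size ratio are illustrative assumptions, not taken from the paper.

import numpy as np

def f(x, y):
    # Toy objective: nonconvex in x and nonconcave in y
    # (the sin terms create regions of both curvature signs).
    return x**2 * y - 0.5 * y**2 + 0.1 * np.sin(3 * x) + 0.3 * np.sin(2 * y)

def grad_f(x, y):
    # Analytic partial derivatives of the toy objective.
    dfdx = 2 * x * y + 0.3 * np.cos(3 * x)
    dfdy = x**2 - y + 0.6 * np.cos(2 * y)
    return dfdx, dfdy

def gda(x0, y0, eta_x=0.01, eta_y=0.1, steps=5000):
    # Simultaneous GDA: the min-player descends in x while the
    # max-player ascends in y. The larger eta_y / eta_x ratio loosely
    # mimics the two-timescale regime under which the paper relates
    # stable limit points of GDA to local minimax points.
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        x, y = x - eta_x * gx, y + eta_y * gy
    return x, y

x_star, y_star = gda(0.5, 0.5)
print(f"limit point: ({x_star:.3f}, {y_star:.3f}), f = {f(x_star, y_star):.3f}")

The unequal step sizes are a deliberate choice: the abstract's connection between GDA limit points and local minimax points is cleanest when the max-player moves on a faster timescale than the min-player.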

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-jin20e,
  title     = {What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?},
  author    = {Jin, Chi and Netrapalli, Praneeth and Jordan, Michael},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4880--4889},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/jin20e/jin20e.pdf},
  url       = {https://proceedings.mlr.press/v119/jin20e.html}
}
EndNote
%0 Conference Paper
%T What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?
%A Chi Jin
%A Praneeth Netrapalli
%A Michael Jordan
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-jin20e
%I PMLR
%P 4880--4889
%U https://proceedings.mlr.press/v119/jin20e.html
%V 119
APA
Jin, C., Netrapalli, P. & Jordan, M. (2020). What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4880-4889. Available from https://proceedings.mlr.press/v119/jin20e.html.