Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm

Taeho Yoon, Ernest K Ryu
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12098-12109, 2021.

Abstract

In this work, we study the computational complexity of reducing the squared gradient magnitude for smooth minimax optimization problems. First, we present algorithms with accelerated O(1/k^2) last-iterate rates, faster than the existing O(1/k) or slower rates for extragradient, Popov, and gradient descent with anchoring. The acceleration mechanism combines extragradient steps with anchoring and is distinct from Nesterov’s acceleration. We then establish optimality of the O(1/k^2) rate through a matching lower bound.
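To make the mechanism described above concrete, the following is a minimal sketch of an anchored extragradient iteration: each step combines an extragradient update with a pull toward the anchor (the initial point). The problem instance (a random bilinear saddle function), the stepsize, and the anchoring weight beta_k = 1/(k+2) are illustrative assumptions, not the paper's exact parameter schedules.

import numpy as np

# Illustrative bilinear saddle problem L(x, y) = x^T A y (convex in x, concave in y).
# Its saddle operator is F(x, y) = (A y, -A^T x), which is L-smooth with L = ||A||_2.
rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
L = np.linalg.norm(A, 2)

def F(z):
    """Saddle operator, with x and y stacked into a single vector z = (x, y)."""
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

# Anchored extragradient sketch: anchoring weight beta_k = 1/(k+2),
# and a conservative illustrative stepsize alpha (not the paper's schedule).
z0 = rng.standard_normal(2 * n)
z = z0.copy()
alpha = 1.0 / (8.0 * L)

for k in range(200):
    beta = 1.0 / (k + 2)
    z_half = z + beta * (z0 - z) - alpha * F(z)    # extrapolation step with anchoring
    z = z + beta * (z0 - z) - alpha * F(z_half)    # update using the operator at z_half

print("squared gradient norm:", np.linalg.norm(F(z)) ** 2)

Dropping the beta * (z0 - z) terms recovers the plain extragradient method, whose last-iterate rate on the squared gradient norm is O(1/k); per the abstract, it is the combination of extragradient steps with anchoring that yields the accelerated O(1/k^2) behavior.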

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-yoon21d,
  title     = {Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm},
  author    = {Yoon, Taeho and Ryu, Ernest K},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {12098--12109},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/yoon21d/yoon21d.pdf},
  url       = {https://proceedings.mlr.press/v139/yoon21d.html},
  abstract  = {In this work, we study the computational complexity of reducing the squared gradient magnitude for smooth minimax optimization problems. First, we present algorithms with accelerated $\mathcal{O}(1/k^2)$ last-iterate rates, faster than the existing $\mathcal{O}(1/k)$ or slower rates for extragradient, Popov, and gradient descent with anchoring. The acceleration mechanism combines extragradient steps with anchoring and is distinct from Nesterov’s acceleration. We then establish optimality of the $\mathcal{O}(1/k^2)$ rate through a matching lower bound.}
}
Endnote
%0 Conference Paper
%T Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm
%A Taeho Yoon
%A Ernest K Ryu
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-yoon21d
%I PMLR
%P 12098--12109
%U https://proceedings.mlr.press/v139/yoon21d.html
%V 139
%X In this work, we study the computational complexity of reducing the squared gradient magnitude for smooth minimax optimization problems. First, we present algorithms with accelerated $\mathcal{O}(1/k^2)$ last-iterate rates, faster than the existing $\mathcal{O}(1/k)$ or slower rates for extragradient, Popov, and gradient descent with anchoring. The acceleration mechanism combines extragradient steps with anchoring and is distinct from Nesterov’s acceleration. We then establish optimality of the $\mathcal{O}(1/k^2)$ rate through a matching lower bound.
APA
Yoon, T. & Ryu, E.K. (2021). Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with O(1/k^2) Rate on Squared Gradient Norm. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:12098-12109. Available from https://proceedings.mlr.press/v139/yoon21d.html.