A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Differentiable Games

Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2863-2873, 2020.

Abstract

We consider differentiable games where the goal is to find a Nash equilibrium. The machine learning community has recently started using variants of the gradient method (GD). Prime examples are extragradient (EG), the optimistic gradient method (OG) and consensus optimization (CO), which enjoy linear convergence in cases like bilinear games, where standard GD fails. The full benefits of these relatively new methods are not known, as there is no unified analysis for both strongly monotone and bilinear games. We provide a new analysis of EG's local and global convergence properties and use it to get a tighter global convergence rate for OG and CO. Our analysis covers the whole range of settings between bilinear and strongly monotone games. It reveals that these methods converge via different mechanisms at these extremes; in between, they exploit the most favorable mechanism for the given problem. We then prove that EG achieves the optimal rate for a wide class of algorithms with any number of extrapolations. Our tight analysis of EG's convergence rate in games shows that, unlike in convex minimization, EG may be much faster than GD.
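To illustrate the contrast the abstract draws, here is a minimal sketch of the extragradient update against simultaneous gradient descent on a toy bilinear game. The matrix, step size, and iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy bilinear game min_x max_y x^T A y with A = I (an illustrative choice).
# Its vector field v(x, y) = (A y, -A^T x) has a Jacobian with purely
# imaginary eigenvalues, so simultaneous gradient descent (GD) spirals away
# from the equilibrium while extragradient (EG) contracts toward it linearly.
A = np.eye(3)
eta = 0.1  # step size, chosen so that eta * ||A|| < 1

def field(x, y):
    # Game vector field: (grad_x f, -grad_y f) for f(x, y) = x^T A y.
    return A @ y, -A.T @ x

def gd_step(x, y):
    gx, gy = field(x, y)
    return x - eta * gx, y - eta * gy

def eg_step(x, y):
    # Extrapolation: look one gradient step ahead along the field...
    gx, gy = field(x, y)
    xh, yh = x - eta * gx, y - eta * gy
    # ...then update from the ORIGINAL point using the lookahead gradient.
    gxh, gyh = field(xh, yh)
    return x - eta * gxh, y - eta * gyh

x_gd, y_gd = np.ones(3), np.ones(3)
x_eg, y_eg = np.ones(3), np.ones(3)
for _ in range(2000):
    x_gd, y_gd = gd_step(x_gd, y_gd)
    x_eg, y_eg = eg_step(x_eg, y_eg)

# The unique equilibrium is (0, 0): EG approaches it, GD drifts away.
print(np.linalg.norm(x_eg), np.linalg.norm(x_gd))
```

The only difference between the two methods is where the gradient is evaluated: EG's single extrapolation is enough to turn the rotating bilinear dynamics from divergent into linearly convergent.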

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-azizian20b,
  title = {A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Differentiable Games},
  author = {Azizian, Wa\"iss and Mitliagkas, Ioannis and Lacoste-Julien, Simon and Gidel, Gauthier},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages = {2863--2873},
  year = {2020},
  editor = {Chiappa, Silvia and Calandra, Roberto},
  volume = {108},
  series = {Proceedings of Machine Learning Research},
  month = {26--28 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v108/azizian20b/azizian20b.pdf},
  url = {https://proceedings.mlr.press/v108/azizian20b.html},
  abstract = {We consider differentiable games where the goal is to find a Nash equilibrium. The machine learning community has recently started using variants of the gradient method (GD). Prime examples are extragradient (EG), the optimistic gradient method (OG) and consensus optimization (CO), which enjoy linear convergence in cases like bilinear games, where standard GD fails. The full benefits of these relatively new methods are not known, as there is no unified analysis for both strongly monotone and bilinear games. We provide a new analysis of EG's local and global convergence properties and use it to get a tighter global convergence rate for OG and CO. Our analysis covers the whole range of settings between bilinear and strongly monotone games. It reveals that these methods converge via different mechanisms at these extremes; in between, they exploit the most favorable mechanism for the given problem. We then prove that EG achieves the optimal rate for a wide class of algorithms with any number of extrapolations. Our tight analysis of EG's convergence rate in games shows that, unlike in convex minimization, EG may be much faster than GD.}
}
Endnote
%0 Conference Paper %T A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Differentiable Games %A Waïss Azizian %A Ioannis Mitliagkas %A Simon Lacoste-Julien %A Gauthier Gidel %B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2020 %E Silvia Chiappa %E Roberto Calandra %F pmlr-v108-azizian20b %I PMLR %P 2863--2873 %U https://proceedings.mlr.press/v108/azizian20b.html %V 108 %X We consider differentiable games where the goal is to find a Nash equilibrium. The machine learning community has recently started using variants of the gradient method (GD). Prime examples are extragradient (EG), the optimistic gradient method (OG) and consensus optimization (CO), which enjoy linear convergence in cases like bilinear games, where standard GD fails. The full benefits of these relatively new methods are not known, as there is no unified analysis for both strongly monotone and bilinear games. We provide a new analysis of EG's local and global convergence properties and use it to get a tighter global convergence rate for OG and CO. Our analysis covers the whole range of settings between bilinear and strongly monotone games. It reveals that these methods converge via different mechanisms at these extremes; in between, they exploit the most favorable mechanism for the given problem. We then prove that EG achieves the optimal rate for a wide class of algorithms with any number of extrapolations. Our tight analysis of EG's convergence rate in games shows that, unlike in convex minimization, EG may be much faster than GD.
APA
Azizian, W., Mitliagkas, I., Lacoste-Julien, S. & Gidel, G. (2020). A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Differentiable Games. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2863-2873. Available from https://proceedings.mlr.press/v108/azizian20b.html.