Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
Proceedings of Machine Learning Research, PMLR 89:196-205, 2019.
Abstract
We consider the convex-concave saddle point problem $\min_{x}\max_{y} f(x)+y^\top A x-g(y)$ where $f$ is smooth and convex and $g$ is smooth and strongly convex. We prove that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if $f$ is not strongly convex. Our result generalizes previous work which either requires $f$ and $g$ to be quadratic functions or requires proximal mappings for both $f$ and $g$. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and show that the iterates in the primal-dual gradient method converge to this "ghost" sequence. Using the same technique we further give an analysis for the primal-dual stochastic variance reduced gradient method for convex-concave saddle point problems with a finite-sum structure.
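To make the setting concrete, the following is a minimal sketch (not from the paper) of the vanilla primal-dual gradient iteration on a toy instance satisfying the abstract's assumptions: $f(x)=\tfrac12\|Bx\|^2$ with a rank-deficient $B$ (smooth and convex but not strongly convex), $g(y)=\tfrac{\mu}{2}\|y\|^2$ (strongly convex), and a random tall $A$, which has full column rank with probability one. The dimensions, step size, and function choices here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy instance of  min_x max_y  f(x) + y^T A x - g(y)
# f(x) = 0.5 * ||B x||^2, B rank-1  -> convex and smooth, NOT strongly convex
# g(y) = 0.5 * mu * ||y||^2         -> mu-strongly convex
# A is 5x3 Gaussian                 -> full column rank with probability 1
rng = np.random.default_rng(0)
n, m = 3, 5
A = rng.standard_normal((m, n))
B = np.zeros((1, n)); B[0, 0] = 1.0
mu = 1.0

grad_f = lambda x: B.T @ (B @ x)   # gradient of f
grad_g = lambda y: mu * y          # gradient of g

def primal_dual_gradient(x, y, eta=0.05, iters=2000):
    """Vanilla primal-dual gradient method:
       x_{k+1} = x_k - eta * (grad f(x_k) + A^T y_k)
       y_{k+1} = y_k + eta * (A x_k - grad g(y_k))"""
    for _ in range(iters):
        x_new = x - eta * (grad_f(x) + A.T @ y)
        y_new = y + eta * (A @ x - grad_g(y))
        x, y = x_new, y_new
    return x, y

# For this instance the unique saddle point is (x*, y*) = (0, 0),
# so the iterate norms measure the distance to the solution.
x, y = primal_dual_gradient(rng.standard_normal(n), rng.standard_normal(m))
print(np.linalg.norm(x), np.linalg.norm(y))
```

Despite $f$ having no strong convexity along most directions, the iterates contract toward the saddle point because the coupling term $y^\top A x$ with a full-column-rank $A$ supplies the missing curvature, which is the phenomenon the paper's linear-convergence result formalizes.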