Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10504-10513, 2020.
Abstract
This paper studies lower complexity bounds for minimax optimization problems whose objective function is the average of n individual smooth convex-concave functions. We consider algorithms that have access to a gradient and proximal oracle for each individual component. For the strongly-convex-strongly-concave case, we prove that such an algorithm cannot reach an ε-suboptimal point in fewer than Ω((n+κ)log(1/ε)) iterations, where κ is the condition number of the objective function. This lower bound matches the upper bound of the existing incremental first-order oracle algorithm, stochastic variance-reduced extragradient. We prove this result via a novel construction that partitions the tridiagonal matrix of classical examples into n groups. This construction is amenable to the analysis of incremental gradient and proximal oracles, and we also extend the analysis to general convex-concave cases.
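For concreteness, here is a minimal LaTeX sketch of the setting: the finite-sum minimax formulation from the abstract, together with a Nesterov-type tridiagonal matrix, which we assume here as the "classical example" being partitioned (the paper's exact scaling and splitting may differ).

\min_{x} \max_{y} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x, y),
\qquad
A = \begin{pmatrix}
 2 & -1 &        &    \\
-1 &  2 & \ddots &    \\
   & \ddots & \ddots & -1 \\
   &        & -1     &  2
\end{pmatrix},

where each f_i is smooth, convex in x, and concave in y. In the classical zero-chain argument, each oracle call can extend the set of nonzero coordinates of the iterate by only a bounded amount; partitioning the rows of A into n groups, one per component f_i, plausibly lets the same argument track incremental gradient and proximal queries one component at a time.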