Forward Looking Best-Response Multiplicative Weights Update Methods for Bilinear Zero-sum Games
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:11096-11117, 2022.
Abstract
Our work focuses on extra-gradient learning algorithms for finding Nash equilibria in bilinear zero-sum games. The proposed method, which can formally be viewed as a variant of Optimistic Mirror Descent (Mertikopoulos et al., 2019), uses a large learning rate for the intermediate gradient step, which essentially amounts to computing (approximate) best-response strategies against the profile of the previous iteration. Although counter-intuitive at first sight, since such a large intermediate learning step seems irrational for an iterative algorithm, we prove that the method guarantees last-iterate convergence to an equilibrium. In particular, we show that, for sufficiently small learning rate $\eta$, the algorithm first reaches an $\eta^{1/\rho}$-approximate Nash equilibrium, with $\rho > 1$, by decreasing the Kullback-Leibler divergence of each iterate by at least $\Omega(\eta^{1+\frac{1}{\rho}})$; from that point on, the method becomes a contraction map and converges to the exact equilibrium. Furthermore, we compare our method experimentally with the optimistic variant of the multiplicative weights update method of Daskalakis and Panageas (2019) and show that it has significant practical potential, offering substantial gains in convergence speed.
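As a rough illustration of the dynamics the abstract describes, the following is a minimal NumPy sketch (not the authors' code; the names `forward_mwu` and `eta_mid` and all parameter values are ours) of an extra-gradient multiplicative-weights loop in which the intermediate step uses a very large rate, so that it approximates a best response to the previous iterate, and the actual update then takes a small step against that intermediate profile:

```python
import numpy as np

def forward_mwu(A, T=10_000, eta=0.01, eta_mid=100.0, seed=0):
    """Illustrative extra-gradient MWU for the bilinear zero-sum game
    min_x max_y x^T A y. The intermediate step uses a large rate
    (eta_mid), so it approximates a best response to the previous
    iterate, in the spirit of the method described in the abstract."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x = rng.dirichlet(np.ones(n))   # row (min) player's mixed strategy
    y = rng.dirichlet(np.ones(m))   # column (max) player's mixed strategy

    def mwu(p, grad, rate):
        # Multiplicative-weights step on the simplex, computed in log
        # space so that the very large intermediate rate cannot overflow.
        logits = np.log(np.maximum(p, 1e-300)) + rate * grad
        w = np.exp(logits - logits.max())
        return w / w.sum()

    for _ in range(T):
        # Forward-looking step: large rate ~ (approximate) best response
        # to the profile of the previous iteration.
        x_hat = mwu(x, -(A @ y), eta_mid)
        y_hat = mwu(y, A.T @ x, eta_mid)
        # Actual update: a small gradient step against that profile.
        x, y = mwu(x, -(A @ y_hat), eta), mwu(y, A.T @ x_hat, eta)
    return x, y

# Example: Rock-Paper-Scissors, whose unique equilibrium is uniform.
A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])
x, y = forward_mwu(A)
duality_gap = (A.T @ x).max() - (A @ y).min()  # 0 at a Nash equilibrium
print(x, y, duality_gap)
```

Note that the convergence guarantee in the paper holds for a sufficiently small learning rate $\eta$, so the parameter values above are illustrative only; as `eta_mid` grows, the intermediate softmax concentrates on best-response actions, which is the forward-looking behavior the abstract refers to.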