Convergence of Mean-Field Langevin Stochastic Descent-Ascent for Distributional Minimax Optimization
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:38869-38893, 2025.
Abstract
We study convergence properties of the discrete-time Mean-Field Langevin Stochastic Gradient Descent-Ascent (MFL-SGDA) algorithm for solving distributional minimax optimization. Such problems arise in various applications, including zero-sum games, generative adversarial networks, and distributionally robust learning. Despite the significance of MFL-SGDA in these contexts, its discrete-time convergence rate remains underexplored. To address this gap, we establish a last-iterate convergence rate of $O(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$ for MFL-SGDA. This rate is nearly optimal compared with the complexity lower bound of its Euclidean counterpart. It also matches the complexity of mean-field Langevin stochastic gradient descent for distributional minimization and the outer-loop iteration complexity of an existing double-loop algorithm for distributional minimax problems. By leveraging an elementary analysis framework that avoids PDE-based techniques, we overcome previous limitations and achieve a faster convergence rate.
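To make the single-loop, discrete-time setting concrete, below is a minimal particle-based sketch of a mean-field Langevin descent-ascent update. The bilinear payoff with quadratic confinement, the random opponent-pairing scheme for the stochastic gradient, and all hyperparameters (`n`, `eta`, `lam`, `steps`) are illustrative assumptions made for this sketch, not the paper's exact objective, noise model, or algorithmic details.

```python
import numpy as np

# Illustrative payoff: f(x, y) = x . A y, plus quadratic confinement terms that keep
# the particles bounded. The matrix A, the confinement, and all hyperparameters are
# assumptions for this sketch; the paper treats a general distributional minimax problem.
rng = np.random.default_rng(0)
d = 2                                  # dimension of each player's particles (assumed)
A = rng.standard_normal((d, d))

def grad_x(x, y):
    # gradient in x of  x.A y + 0.5*|x|^2   (confinement term is an assumption)
    return A @ y + x

def grad_y(x, y):
    # gradient in y of  x.A y - 0.5*|y|^2
    return A.T @ x - y

def mfl_sgda(n=512, steps=2000, eta=0.05, lam=0.1, seed=1):
    """Single-loop particle discretization of mean-field Langevin SGDA (sketch).

    Each player's distribution is represented by n particles. At every step the two
    populations take *simultaneous* stochastic gradient steps against one randomly
    sampled opposing particle, plus Gaussian (Langevin) noise whose scale is set by
    the entropic-regularization strength lam.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))    # particles of the min player (distribution mu)
    Y = rng.standard_normal((n, d))    # particles of the max player (distribution nu)
    noise_scale = np.sqrt(2.0 * eta * lam)
    for _ in range(steps):
        # stochastic gradients: pair each particle with one random opposing particle
        idx = rng.integers(n, size=n)
        gX = np.array([grad_x(X[i], Y[idx[i]]) for i in range(n)])
        gY = np.array([grad_y(X[idx[i]], Y[i]) for i in range(n)])
        # simultaneous descent (min player) / ascent (max player) Langevin updates
        X = X - eta * gX + noise_scale * rng.standard_normal(X.shape)
        Y = Y + eta * gY + noise_scale * rng.standard_normal(Y.shape)
    return X, Y

if __name__ == "__main__":
    X, Y = mfl_sgda()
    print("mean of min-player particles:", X.mean(axis=0))
    print("mean of max-player particles:", Y.mean(axis=0))
```

The single simultaneous update per iteration is what distinguishes this scheme from double-loop approaches, which would run an inner Langevin loop for one player to (near) convergence before each outer step.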