Open problem: Convergence of single-timescale mean-field Langevin descent-ascent for two-player zero-sum games

Guillaume Wang, Lénaïc Chizat
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5345-5350, 2024.

Abstract

Let $f: T^d \times T^d \to \mathbb{R}$ be a smooth function on the $d$-torus and let $\beta>0$. Consider the min-max objective functional $F_\beta(\mu, \nu) = \iint f d\mu d\nu + \beta^{-1} H(\mu) - \beta^{-1} H(\nu)$ over $\mathcal{P}(T^d) \times \mathcal{P}(T^d)$, where $H$ denotes the negative differential entropy. Its unique saddle point defines the entropy-regularized mixed Nash equilibrium of a two-player zero-sum game, and its Wasserstein gradient descent-ascent flow $(\mu_t, \nu_t)$ corresponds to the mean-field limit of a Langevin descent-ascent dynamics. Do $\mu_t$ and $\nu_t$ converge (weakly, say) as $t \to \infty$, for any $f$ and $\beta$? This rather natural qualitative question is still open, and it is not clear whether it can be addressed using the tools currently available for the analysis of dynamics in Wasserstein space. Even though the simple trick of using a different timescale for the ascent versus the descent is known to guarantee convergence, we propose this question as a toy setting to further our understanding of the Wasserstein geometry for optimization.
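As an illustration of the dynamics in question, the following is a minimal interacting-particle sketch of single-timescale Langevin descent-ascent (not from the paper; the payoff $f(x,y) = \sin(x-y)$ on the $1$-torus, the step size, and all other parameter values are illustrative assumptions). Each player's measure is represented by an empirical distribution of $m$ particles: the min player's particles follow noisy gradient descent on the mean-field potential $x \mapsto \frac{1}{m}\sum_j f(x, y_j)$, the max player's follow noisy gradient ascent on $y \mapsto \frac{1}{m}\sum_i f(x_i, y)$, both at the same timescale.

```python
import numpy as np

def langevin_gda(m=50, beta=10.0, dt=1e-2, steps=2000, seed=0):
    """Single-timescale Langevin descent-ascent for f(x, y) = sin(x - y)
    on the 1-torus, with m particles per player (illustrative setup)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 2 * np.pi, m)  # min player's particles (samples of mu_t)
    y = rng.uniform(0, 2 * np.pi, m)  # max player's particles (samples of nu_t)
    sigma = np.sqrt(2.0 * dt / beta)  # Euler-Maruyama noise scale
    for _ in range(steps):
        diff = x[:, None] - y[None, :]      # pairwise differences x_i - y_j
        gx = np.cos(diff).mean(axis=1)      # grad_x of (1/m) sum_j sin(x - y_j)
        gy = -np.cos(diff).mean(axis=0)     # grad_y of (1/m) sum_i sin(x_i - y)
        x = (x - dt * gx + sigma * rng.standard_normal(m)) % (2 * np.pi)  # descent
        y = (y + dt * gy + sigma * rng.standard_normal(m)) % (2 * np.pi)  # ascent
    return x, y
```

The open question asks whether the mean-field limit of such a scheme converges; for bilinear-type payoffs like this one, single-timescale descent-ascent is prone to cycling, which is precisely what makes the qualitative convergence question delicate.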

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-wang24c,
  title     = {Open problem: Convergence of single-timescale mean-field Langevin descent-ascent for two-player zero-sum games},
  author    = {Wang, Guillaume and Chizat, L{\'e}na{\"i}c},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {5345--5350},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/wang24c/wang24c.pdf},
  url       = {https://proceedings.mlr.press/v247/wang24c.html},
  abstract  = {Let $f: T^d \times T^d \to \mathbb{R}$ be a smooth function on the $d$-torus and let $\beta>0$. Consider the min-max objective functional $F_\beta(\mu, \nu) = \iint f d\mu d\nu + \beta^{-1} H(\mu) - \beta^{-1} H(\nu)$ over $\mathcal{P}(T^d) \times \mathcal{P}(T^d)$, where $H$ denotes the negative differential entropy. Its unique saddle point defines the entropy-regularized mixed Nash equilibrium of a two-player zero-sum game, and its Wasserstein gradient descent-ascent flow $(\mu_t, \nu_t)$ corresponds to the mean-field limit of a Langevin descent-ascent dynamics. Do $\mu_t$ and $\nu_t$ converge (weakly, say) as $t \to \infty$, for any $f$ and $\beta$? This rather natural qualitative question is still open, and it is not clear whether it can be addressed using the tools currently available for the analysis of dynamics in Wasserstein space. Even though the simple trick of using a different timescale for the ascent versus the descent is known to guarantee convergence, we propose this question as a toy setting to further our understanding of the Wasserstein geometry for optimization.}
}
Endnote
%0 Conference Paper
%T Open problem: Convergence of single-timescale mean-field Langevin descent-ascent for two-player zero-sum games
%A Guillaume Wang
%A Lénaïc Chizat
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-wang24c
%I PMLR
%P 5345--5350
%U https://proceedings.mlr.press/v247/wang24c.html
%V 247
%X Let $f: T^d \times T^d \to \mathbb{R}$ be a smooth function on the $d$-torus and let $\beta>0$. Consider the min-max objective functional $F_\beta(\mu, \nu) = \iint f d\mu d\nu + \beta^{-1} H(\mu) - \beta^{-1} H(\nu)$ over $\mathcal{P}(T^d) \times \mathcal{P}(T^d)$, where $H$ denotes the negative differential entropy. Its unique saddle point defines the entropy-regularized mixed Nash equilibrium of a two-player zero-sum game, and its Wasserstein gradient descent-ascent flow $(\mu_t, \nu_t)$ corresponds to the mean-field limit of a Langevin descent-ascent dynamics. Do $\mu_t$ and $\nu_t$ converge (weakly, say) as $t \to \infty$, for any $f$ and $\beta$? This rather natural qualitative question is still open, and it is not clear whether it can be addressed using the tools currently available for the analysis of dynamics in Wasserstein space. Even though the simple trick of using a different timescale for the ascent versus the descent is known to guarantee convergence, we propose this question as a toy setting to further our understanding of the Wasserstein geometry for optimization.
APA
Wang, G. & Chizat, L. (2024). Open problem: Convergence of single-timescale mean-field Langevin descent-ascent for two-player zero-sum games. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:5345-5350. Available from https://proceedings.mlr.press/v247/wang24c.html.