Reducing Variance of Stochastic Optimization for Approximating Nash Equilibria in Normal-Form Games

Linjian Meng, Wubing Chen, Wenbin Li, Tianpei Yang, Youzhi Zhang, Yang Gao
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:43716-43753, 2025.

Abstract

Nash equilibrium (NE) plays an important role in game theory. Efficiently computing an NE in normal-form games (NFGs) is challenging because the problem is computationally complex and non-convex. Machine learning (ML), the cornerstone of modern artificial intelligence, has demonstrated remarkable empirical performance across various applications, including non-convex optimization. To leverage non-convex stochastic optimization techniques from ML for approximating an NE, various loss functions have been proposed. Among these, only one loss function is unbiased, allowing for unbiased estimation under sampled play. Unfortunately, this loss function suffers from high variance, which degrades the convergence rate. To improve the convergence rate by mitigating the high variance of the existing unbiased loss function, we propose a novel surrogate loss function named Nash Advantage Loss (NAL). NAL is provably unbiased and exhibits significantly lower variance than the existing unbiased loss function. Experimental results demonstrate that the algorithm minimizing NAL achieves a significantly faster empirical convergence rate than other algorithms, while also reducing the variance of the estimated loss value by several orders of magnitude.
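
To make the setting concrete, below is a minimal sketch (assuming standard textbook definitions, not taken from the paper) of the two quantities the abstract refers to: the NE gap (exploitability), which measures how far a strategy profile is from an NE, and an unbiased Monte Carlo estimate of expected payoff under sampled play. The toy payoff matrices and the helper names ne_gap and sampled_payoff are illustrative; the sketch does not implement NAL itself.

```python
import numpy as np

def ne_gap(A, B, x, y):
    """NE gap (exploitability) of the profile (x, y) in a bimatrix game.

    A and B are the row and column players' payoff matrices; x and y are
    their mixed strategies. The gap is zero exactly at a Nash equilibrium.
    """
    gain_row = np.max(A @ y) - x @ A @ y   # row player's best deviation gain
    gain_col = np.max(x @ B) - x @ B @ y   # column player's best deviation gain
    return gain_row + gain_col

def sampled_payoff(A, x, y, rng, n=1):
    """Unbiased estimate of x^T A y under sampled play: draw actions
    i ~ x and j ~ y, then average the realized payoffs A[i, j]."""
    i = rng.choice(len(x), size=n, p=x)
    j = rng.choice(len(y), size=n, p=y)
    return A[i, j].mean()

# Toy 3x3 game (illustrative assumption, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = y = np.ones(3) / 3  # uniform play

print(f"NE gap of uniform play: {ne_gap(A, B, x, y):.4f}")
print(f"Exact vs. sampled payoff: {x @ A @ y:.4f} vs. "
      f"{sampled_payoff(A, x, y, rng, n=1000):.4f}")
```

Loss functions for NE approximation drive stochastic estimates like the one above toward a profile with zero gap; the paper's contribution is a surrogate loss whose sampled-play estimator retains unbiasedness while having much lower variance.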

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-meng25a,
  title = {Reducing Variance of Stochastic Optimization for Approximating {N}ash Equilibria in Normal-Form Games},
  author = {Meng, Linjian and Chen, Wubing and Li, Wenbin and Yang, Tianpei and Zhang, Youzhi and Gao, Yang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {43716--43753},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/meng25a/meng25a.pdf},
  url = {https://proceedings.mlr.press/v267/meng25a.html},
  abstract = {Nash equilibrium (NE) plays an important role in game theory. Efficiently computing an NE in normal-form games (NFGs) is challenging because the problem is computationally complex and non-convex. Machine learning (ML), the cornerstone of modern artificial intelligence, has demonstrated remarkable empirical performance across various applications, including non-convex optimization. To leverage non-convex stochastic optimization techniques from ML for approximating an NE, various loss functions have been proposed. Among these, only one loss function is unbiased, allowing for unbiased estimation under sampled play. Unfortunately, this loss function suffers from high variance, which degrades the convergence rate. To improve the convergence rate by mitigating the high variance of the existing unbiased loss function, we propose a novel surrogate loss function named Nash Advantage Loss (NAL). NAL is provably unbiased and exhibits significantly lower variance than the existing unbiased loss function. Experimental results demonstrate that the algorithm minimizing NAL achieves a significantly faster empirical convergence rate than other algorithms, while also reducing the variance of the estimated loss value by several orders of magnitude.}
}
Endnote
%0 Conference Paper
%T Reducing Variance of Stochastic Optimization for Approximating Nash Equilibria in Normal-Form Games
%A Linjian Meng
%A Wubing Chen
%A Wenbin Li
%A Tianpei Yang
%A Youzhi Zhang
%A Yang Gao
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-meng25a
%I PMLR
%P 43716--43753
%U https://proceedings.mlr.press/v267/meng25a.html
%V 267
%X Nash equilibrium (NE) plays an important role in game theory. Efficiently computing an NE in normal-form games (NFGs) is challenging because the problem is computationally complex and non-convex. Machine learning (ML), the cornerstone of modern artificial intelligence, has demonstrated remarkable empirical performance across various applications, including non-convex optimization. To leverage non-convex stochastic optimization techniques from ML for approximating an NE, various loss functions have been proposed. Among these, only one loss function is unbiased, allowing for unbiased estimation under sampled play. Unfortunately, this loss function suffers from high variance, which degrades the convergence rate. To improve the convergence rate by mitigating the high variance of the existing unbiased loss function, we propose a novel surrogate loss function named Nash Advantage Loss (NAL). NAL is provably unbiased and exhibits significantly lower variance than the existing unbiased loss function. Experimental results demonstrate that the algorithm minimizing NAL achieves a significantly faster empirical convergence rate than other algorithms, while also reducing the variance of the estimated loss value by several orders of magnitude.
APA
Meng, L., Chen, W., Li, W., Yang, T., Zhang, Y. & Gao, Y. (2025). Reducing Variance of Stochastic Optimization for Approximating Nash Equilibria in Normal-Form Games. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:43716-43753. Available from https://proceedings.mlr.press/v267/meng25a.html.