Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning

Batuhan Yardim, Niao He
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:31-44, 2025.

Abstract

Mean-field games (MFG) have become significant tools for solving large-scale multi-agent reinforcement learning problems under symmetry. However, the assumptions of access to a known MFG model (which might not be available for real-world games) and of exact symmetry (real-world scenarios often feature heterogeneity) limit the applicability of MFGs. In this work, we broaden the applicability of MFGs by providing a methodology to extend any finite-player, possibly asymmetric, game to an “induced MFG”. First, we prove that $N$-player dynamic games can be symmetrized and smoothly extended to the infinite-player continuum via Kirszbraun extensions. Next, we define $\alpha,\beta$-symmetric games, a new class of dynamic games that incorporate approximate permutation invariance. We establish explicit approximation bounds for $\alpha,\beta$-symmetric games, demonstrating that the induced mean-field Nash policy is an approximate Nash of the $N$-player game. We analyze TD learning using sample trajectories of the $N$-player game, permitting learning without using an explicit MFG model or oracle. This is used to show a sample complexity of $\widetilde{\mathcal{O}}(\varepsilon^{-6})$ for $N$-agent monotone extendable games to learn an $\varepsilon$-Nash. Evaluations on benchmarks with thousands of agents support our theory of learning under (approximate) symmetry without explicit MFGs.
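To unpack the abstract's terminology, the displays below state, in standard notation, (i) the Kirszbraun extension theorem invoked to lift a symmetrized $N$-player game to the infinite-player continuum, (ii) one natural reading of the approximate permutation invariance behind $\alpha,\beta$-symmetry, and (iii) the $\varepsilon$-Nash criterion against which the induced mean-field policy is measured. Items (i) and (iii) are standard statements; item (ii) is an illustrative formalization rather than the paper's exact definition, and the symbols $\mathcal{H}_1, \mathcal{H}_2$, $S$, $L$, $r^i$, $P$, $J^i$, $S_N$ are generic placeholders, not the paper's notation.

$$\textbf{(i)}\quad f : S \subseteq \mathcal{H}_1 \to \mathcal{H}_2 \ \text{is } L\text{-Lipschitz} \;\Longrightarrow\; \exists\, \bar{f} : \mathcal{H}_1 \to \mathcal{H}_2 \ \text{with}\ \bar{f}\big|_S = f \ \text{and}\ \bar{f}\ L\text{-Lipschitz}.$$

$$\textbf{(ii)}\quad \bigl| r^i(\mathbf{s}, \mathbf{a}) - r^{\sigma(i)}(\sigma \mathbf{s}, \sigma \mathbf{a}) \bigr| \le \alpha, \qquad \bigl\| P(\cdot \mid \mathbf{s}, \mathbf{a}) - \sigma^{-1} P(\cdot \mid \sigma \mathbf{s}, \sigma \mathbf{a}) \bigr\|_{\mathrm{TV}} \le \beta, \qquad \forall\, \sigma \in S_N,\ i \in [N].$$

$$\textbf{(iii)}\quad \max_{i \in [N]} \ \sup_{\pi'^i} \ \Bigl( J^i(\pi'^i, \pi^{-i}) - J^i(\pi^i, \pi^{-i}) \Bigr) \le \varepsilon.$$

Setting $\alpha = \beta = 0$ in (ii) recovers exact permutation invariance, the classical setting in which a mean-field limit is taken; the approximation bounds referenced in the abstract quantify the gap in (iii) in terms of $\alpha$, $\beta$, and $N$.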

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-yardim25a,
  title     = {Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning},
  author    = {Yardim, Batuhan and He, Niao},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {31--44},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/yardim25a/yardim25a.pdf},
  url       = {https://proceedings.mlr.press/v283/yardim25a.html}
}
Endnote
%0 Conference Paper
%T Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning
%A Batuhan Yardim
%A Niao He
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-yardim25a
%I PMLR
%P 31--44
%U https://proceedings.mlr.press/v283/yardim25a.html
%V 283
APA
Yardim, B., & He, N. (2025). Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:31-44. Available from https://proceedings.mlr.press/v283/yardim25a.html.
