Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation

Yuanhao Wang, Qinghua Liu, Yu Bai, Chi Jin
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:2793-2848, 2023.

Abstract

A unique challenge in Multi-Agent Reinforcement Learning (MARL) is the \emph{curse of multiagency}, where the description length of the game as well as the complexity of many existing learning algorithms scale \emph{exponentially} in the number of agents. While recent works successfully address this challenge under the model of tabular Markov Games, their mechanisms critically rely on the number of states being finite and small, and do not extend to practical scenarios with enormous state spaces where \emph{function approximation} must be used to approximate value functions or policies. This paper presents the first line of MARL algorithms that provably resolve the curse of multiagency under function approximation. We design a new algorithm—\emph{V-Learning with Policy Replay}—which gives the first \emph{polynomial} sample complexity results for learning approximate Coarse Correlated Equilibria (CCEs) of Markov Games under decentralized linear function approximation. Our algorithm always outputs Markov CCEs, and its sample complexity also significantly improves over the current best result for finding Markov CCEs in the tabular setting. This paper further presents an alternative algorithm—\emph{Decentralized Optimistic Policy Mirror Descent}—which finds policy-class-restricted CCEs using a polynomial number of samples. In exchange for learning a weaker version of CCEs, this algorithm applies to a wider range of problems under generic function approximation, such as linear quadratic games and MARL problems with low “marginal” Eluder dimension.
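
For context, the equilibrium notion referenced throughout is the standard coarse correlated equilibrium; the formalization below is a common textbook definition paraphrased for reference (the notation $m$ for the number of players and $s_1$ for the initial state is ours), not a quotation from the paper. A joint, possibly correlated policy $\pi$ is an $\varepsilon$-approximate CCE if no player can gain more than $\varepsilon$ by unilaterally deviating to its best independent policy while the others continue to follow $\pi$:
$$\max_{i \in [m]} \; \max_{\pi_i'} \Big( V_i^{\pi_i' \times \pi_{-i}}(s_1) \;-\; V_i^{\pi}(s_1) \Big) \;\le\; \varepsilon,$$
where $V_i$ is player $i$'s value function and $\pi_{-i}$ denotes the marginal of $\pi$ over the remaining players. A \emph{Markov} CCE, the stronger notion output by V-Learning with Policy Replay, additionally requires $\pi$ to depend only on the current state and time step rather than on history.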

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-wang23b,
  title     = {Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation},
  author    = {Wang, Yuanhao and Liu, Qinghua and Bai, Yu and Jin, Chi},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {2793--2848},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/wang23b/wang23b.pdf},
  url       = {https://proceedings.mlr.press/v195/wang23b.html}
}
APA
Wang, Y., Liu, Q., Bai, Y., & Jin, C. (2023). Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:2793-2848. Available from https://proceedings.mlr.press/v195/wang23b.html.
