The Complexity of Markov Equilibrium in Stochastic Games

Constantinos Daskalakis, Noah Golowich, Kaiqing Zhang
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:4180-4234, 2023.

Abstract

We show that computing approximate stationary Markov coarse correlated equilibria (CCE) in general-sum stochastic games is PPAD-hard, even when there are two players, the game is turn-based, the discount factor is an absolute constant, and the approximation is an absolute constant. Our intractability results stand in sharp contrast to the results in normal-form games, where exact CCEs are efficiently computable. A fortiori, our results imply that, in the setting of multi-agent reinforcement learning (MARL), it is computationally hard to learn stationary Markov CCE policies in stochastic games, even when the interaction is two-player and turn-based, and both the discount factor and the desired approximation of the learned policies are absolute constants. In turn, these results stand in sharp contrast to single-agent reinforcement learning (RL), where near-optimal stationary Markov policies can be learned in a computationally efficient manner. Complementing our intractability results for stationary Markov CCEs, we provide a decentralized algorithm (assuming shared randomness among players) for learning a nonstationary Markov CCE policy with polynomial time and sample complexity in all problem parameters. All previous work on learning Markov CCE policies required time and sample complexity exponential in the number of players. On balance, our work advocates for the use of nonstationary Markov CCE policies as a computationally and statistically tractable solution concept in MARL, advancing an important and outstanding frontier in machine learning.
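
To make concrete the normal-form baseline that the abstract contrasts against, the Python sketch below (a minimal illustration, not code or an algorithm from the paper) runs multiplicative-weights (Hedge) self-play in a two-player normal-form game; the time-averaged joint play of no-regret learners is a standard way to obtain an approximate CCE efficiently. The function names mw_cce and cce_gap, the example payoff matrices, and the step size are illustrative choices, not taken from the paper.

# Minimal illustrative sketch (not from the paper): in a normal-form game,
# no-regret (multiplicative-weights) self-play yields an approximate coarse
# correlated equilibrium (CCE) -- the tractable baseline the abstract
# contrasts with the PPAD-hardness of stationary Markov CCEs in stochastic games.
import numpy as np

def mw_cce(A, B, T=5000, eta=0.05):
    """Hedge self-play on payoff matrices A (row player) and B (column player).
    Returns the time-averaged joint distribution of play, an approximate CCE."""
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m
    joint = np.zeros((n, m))
    for _ in range(T):
        joint += np.outer(x, y)
        gx, gy = A @ y, B.T @ x          # expected payoffs against the opponent's mix
        x = x * np.exp(eta * gx); x /= x.sum()
        y = y * np.exp(eta * gy); y /= y.sum()
    return joint / T

def cce_gap(A, B, mu):
    """Largest gain from a unilateral constant deviation: the CCE approximation error."""
    col_marg, row_marg = mu.sum(axis=0), mu.sum(axis=1)
    vA, vB = (A * mu).sum(), (B * mu).sum()
    return max((A @ col_marg).max() - vA, (B.T @ row_marg).max() - vB)

# Example: a small general-sum coordination game (battle-of-the-sexes payoffs).
A = np.array([[2.0, 0.0], [0.0, 1.0]])   # row player's payoffs
B = np.array([[1.0, 0.0], [0.0, 2.0]])   # column player's payoffs
mu = mw_cce(A, B)
print(cce_gap(A, B, mu))  # small CCE gap; it shrinks toward a floor set by the step size eta

The guarantee behind the sketch is standard: if each player's external regret over T rounds is at most R, the averaged joint distribution is an (R/T)-approximate CCE. The paper's point is that no such simple, stationary construction is available for stationary Markov CCEs in stochastic games.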

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-daskalakis23a,
  title     = {The Complexity of Markov Equilibrium in Stochastic Games},
  author    = {Daskalakis, Constantinos and Golowich, Noah and Zhang, Kaiqing},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {4180--4234},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/daskalakis23a/daskalakis23a.pdf},
  url       = {https://proceedings.mlr.press/v195/daskalakis23a.html}
}
Endnote
%0 Conference Paper
%T The Complexity of Markov Equilibrium in Stochastic Games
%A Constantinos Daskalakis
%A Noah Golowich
%A Kaiqing Zhang
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-daskalakis23a
%I PMLR
%P 4180--4234
%U https://proceedings.mlr.press/v195/daskalakis23a.html
%V 195
APA
Daskalakis, C., Golowich, N. & Zhang, K.. (2023). The Complexity of Markov Equilibrium in Stochastic Games. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:4180-4234 Available from https://proceedings.mlr.press/v195/daskalakis23a.html.