Learning in Multi-Player Stochastic Games

William Brown
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1927-1937, 2021.

Abstract

We consider the problem of simultaneous learning in stochastic games with many players in the finite-horizon setting. While the typical target solution for a stochastic game is a Nash equilibrium, this is intractable with many players. We instead focus on variants of correlated equilibria, such as those studied for extensive-form games. We begin with a hardness result for the adversarial MDP problem: even for a horizon of 3, obtaining sublinear regret against the best non-stationary policy is NP-hard when both rewards and transitions are adversarial. This implies that convergence to even the weakest natural solution concept—normal-form coarse correlated equilibrium—is not possible via black-box reduction to a no-regret algorithm, even in stochastic games with constant horizon (unless $NP \subseteq BPP$). Instead, we turn to a different target: algorithms which generate an equilibrium when they are used by all players. Our main result is an algorithm which generates an extensive-form correlated equilibrium, with runtime exponential in the horizon but polynomial in all other parameters. We give a similar algorithm which is polynomial in all parameters for “fast-mixing” stochastic games. We also show a method for efficiently reaching normal-form coarse correlated equilibria in “single-controller” stochastic games, which follows the traditional no-regret approach. When shared randomness is available, the two generative algorithms can be extended to give simultaneous regret bounds and converge in the traditional sense.
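For intuition on the no-regret route that the hardness result rules out, here is a minimal sketch (not from the paper; the payoff matrices, step size, and horizon T are illustrative assumptions) of the classical fact in the one-shot normal-form setting: when every player runs an external-regret minimizer such as Hedge, the empirical distribution of joint play converges to a normal-form coarse correlated equilibrium (CCE). The paper shows this black-box reduction cannot be carried over to stochastic games with adversarial transitions, even at horizon 3.

```python
import numpy as np

# Sketch: two players run Hedge in a random 3x3 normal-form game.
# The empirical joint distribution of play approaches a CCE.
rng = np.random.default_rng(0)
A = rng.random((3, 3))           # row player's payoffs (hypothetical game)
B = rng.random((3, 3))           # column player's payoffs
T = 5000
eta = np.sqrt(np.log(3) / T)     # standard Hedge step size

w_row, w_col = np.ones(3), np.ones(3)
joint_counts = np.zeros((3, 3))  # empirical distribution of joint play

for _ in range(T):
    p = w_row / w_row.sum()
    q = w_col / w_col.sum()
    i = rng.choice(3, p=p)
    j = rng.choice(3, p=q)
    joint_counts[i, j] += 1
    # full-information Hedge update against the opponent's mixed strategy
    w_row *= np.exp(eta * (A @ q))
    w_col *= np.exp(eta * (B.T @ p))

mu = joint_counts / T
# CCE check: no player gains much by committing to a fixed action
# before the correlated draw; both gaps shrink as T grows.
row_gain = max(A[k] @ mu.sum(axis=0) for k in range(3)) - np.sum(mu * A)
col_gain = max(B.T[k] @ mu.sum(axis=1) for k in range(3)) - np.sum(mu * B)
print(f"row deviation gain: {row_gain:.4f}, col deviation gain: {col_gain:.4f}")
```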

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-brown21a,
  title = {Learning in Multi-Player Stochastic Games},
  author = {Brown, William},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages = {1927--1937},
  year = {2021},
  editor = {de Campos, Cassio and Maathuis, Marloes H.},
  volume = {161},
  series = {Proceedings of Machine Learning Research},
  month = {27--30 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v161/brown21a/brown21a.pdf},
  url = {https://proceedings.mlr.press/v161/brown21a.html},
  abstract = {We consider the problem of simultaneous learning in stochastic games with many players in the finite-horizon setting. While the typical target solution for a stochastic game is a Nash equilibrium, this is intractable with many players. We instead focus on variants of correlated equilibria, such as those studied for extensive-form games. We begin with a hardness result for the adversarial MDP problem: even for a horizon of 3, obtaining sublinear regret against the best non-stationary policy is NP-hard when both rewards and transitions are adversarial. This implies that convergence to even the weakest natural solution concept—normal-form coarse correlated equilibrium—is not possible via black-box reduction to a no-regret algorithm, even in stochastic games with constant horizon (unless $NP \subseteq BPP$). Instead, we turn to a different target: algorithms which generate an equilibrium when they are used by all players. Our main result is an algorithm which generates an extensive-form correlated equilibrium, with runtime exponential in the horizon but polynomial in all other parameters. We give a similar algorithm which is polynomial in all parameters for “fast-mixing” stochastic games. We also show a method for efficiently reaching normal-form coarse correlated equilibria in “single-controller” stochastic games, which follows the traditional no-regret approach. When shared randomness is available, the two generative algorithms can be extended to give simultaneous regret bounds and converge in the traditional sense.}
}
Endnote
%0 Conference Paper
%T Learning in Multi-Player Stochastic Games
%A William Brown
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-brown21a
%I PMLR
%P 1927--1937
%U https://proceedings.mlr.press/v161/brown21a.html
%V 161
%X We consider the problem of simultaneous learning in stochastic games with many players in the finite-horizon setting. While the typical target solution for a stochastic game is a Nash equilibrium, this is intractable with many players. We instead focus on variants of correlated equilibria, such as those studied for extensive-form games. We begin with a hardness result for the adversarial MDP problem: even for a horizon of 3, obtaining sublinear regret against the best non-stationary policy is NP-hard when both rewards and transitions are adversarial. This implies that convergence to even the weakest natural solution concept—normal-form coarse correlated equilibrium—is not possible via black-box reduction to a no-regret algorithm, even in stochastic games with constant horizon (unless $NP \subseteq BPP$). Instead, we turn to a different target: algorithms which generate an equilibrium when they are used by all players. Our main result is an algorithm which generates an extensive-form correlated equilibrium, with runtime exponential in the horizon but polynomial in all other parameters. We give a similar algorithm which is polynomial in all parameters for “fast-mixing” stochastic games. We also show a method for efficiently reaching normal-form coarse correlated equilibria in “single-controller” stochastic games, which follows the traditional no-regret approach. When shared randomness is available, the two generative algorithms can be extended to give simultaneous regret bounds and converge in the traditional sense.
APA
Brown, W. (2021). Learning in Multi-Player Stochastic Games. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1927-1937. Available from https://proceedings.mlr.press/v161/brown21a.html.
