A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games

Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Tong Zhang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:24496-24523, 2022.

Abstract

Existing studies on provably efficient algorithms for Markov games (MGs) almost exclusively build on the “optimism in the face of uncertainty” (OFU) principle. This work focuses on a distinct approach of posterior sampling, which is celebrated in many bandits and reinforcement learning settings but remains under-explored for MGs. Specifically, for episodic two-player zero-sum MGs, a novel posterior sampling algorithm is developed with general function approximation. Theoretical analysis demonstrates that the posterior sampling algorithm admits a $\sqrt{T}$-regret bound for problems with a low multi-agent decoupling coefficient, which is a new complexity measure for MGs, where $T$ denotes the number of episodes. When specializing to linear MGs, the obtained regret bound matches the state-of-the-art results. To the best of our knowledge, this is the first provably efficient posterior sampling algorithm for MGs with frequentist regret guarantees, which extends the toolbox for MGs and promotes the broad applicability of posterior sampling.

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-xiong22b,
  title     = {A Self-Play Posterior Sampling Algorithm for Zero-Sum {M}arkov Games},
  author    = {Xiong, Wei and Zhong, Han and Shi, Chengshuai and Shen, Cong and Zhang, Tong},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {24496--24523},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/xiong22b/xiong22b.pdf},
  url       = {https://proceedings.mlr.press/v162/xiong22b.html}
}
Endnote
%0 Conference Paper
%T A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games
%A Wei Xiong
%A Han Zhong
%A Chengshuai Shi
%A Cong Shen
%A Tong Zhang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-xiong22b
%I PMLR
%P 24496--24523
%U https://proceedings.mlr.press/v162/xiong22b.html
%V 162
APA
Xiong, W., Zhong, H., Shi, C., Shen, C., & Zhang, T. (2022). A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:24496-24523. Available from https://proceedings.mlr.press/v162/xiong22b.html.
