Fictitious Play and Best-Response Dynamics in Identical Interest and Zero-Sum Stochastic Games

Lucas Baudin, Rida Laraki
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1664-1690, 2022.

Abstract

This paper proposes an extension of fictitious play (FP) (Brown, 1951; Robinson, 1951), a popular decentralized discrete-time learning procedure for repeated static games, to the dynamic model of discounted stochastic games (Shapley, 1953). Our family of discrete-time FP procedures is proven to converge to the set of stationary Nash equilibria in identical-interest discounted stochastic games. This extends similar convergence results for static games (Monderer & Shapley, 1996a). We then analyze the continuous-time counterpart of our FP procedures, which includes as a particular case the best-response dynamics introduced and studied by Leslie et al. (2020) in the context of zero-sum stochastic games. We prove the convergence of these dynamics to stationary Nash equilibria in identical-interest and zero-sum discounted stochastic games. Thanks to stochastic approximation, we can infer from the continuous-time convergence some discrete-time results, such as the convergence to stationary equilibria in zero-sum and team stochastic games (Holler, 2020).
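To fix intuition for the procedure the paper generalizes, the following is a minimal sketch of classic discrete-time fictitious play in a static 2x2 identical-interest (coordination) game: each player best-responds to the empirical distribution of the opponent's past actions. The payoff matrix and horizon are illustrative assumptions, not taken from the paper, and the stochastic-game machinery (states, discounting) is omitted.

```python
import numpy as np

# Common payoff matrix for an identical-interest coordination game:
# both players receive payoff[a1, a2] (illustrative assumption).
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

# Empirical action counts for each player; a count of 1 per action
# acts as a uniform prior belief.
counts = [np.ones(2), np.ones(2)]

for t in range(2000):
    beliefs = [c / c.sum() for c in counts]
    # Each player plays a best response to the opponent's empirical mixture.
    a1 = int(np.argmax(payoff @ beliefs[1]))
    a2 = int(np.argmax(payoff.T @ beliefs[0]))
    counts[0][a1] += 1
    counts[1][a2] += 1

empirical = [c / c.sum() for c in counts]
# In identical-interest games, empirical play converges to a Nash
# equilibrium (Monderer & Shapley, 1996a); here the players coordinate.
print(empirical[0], empirical[1])
```

In this coordination game the empirical frequencies concentrate on a single action for both players, i.e. a pure stationary equilibrium; the paper's contribution is establishing analogous convergence when the game also has states that evolve stochastically with the players' actions.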

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-baudin22a,
  title     = {Fictitious Play and Best-Response Dynamics in Identical Interest and Zero-Sum Stochastic Games},
  author    = {Baudin, Lucas and Laraki, Rida},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {1664--1690},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/baudin22a/baudin22a.pdf},
  url       = {https://proceedings.mlr.press/v162/baudin22a.html},
  abstract  = {This paper proposes an extension of fictitious play (FP) (Brown, 1951; Robinson, 1951), a popular decentralized discrete-time learning procedure for repeated static games, to the dynamic model of discounted stochastic games (Shapley, 1953). Our family of discrete-time FP procedures is proven to converge to the set of stationary Nash equilibria in identical-interest discounted stochastic games. This extends similar convergence results for static games (Monderer & Shapley, 1996a). We then analyze the continuous-time counterpart of our FP procedures, which includes as a particular case the best-response dynamics introduced and studied by Leslie et al. (2020) in the context of zero-sum stochastic games. We prove the convergence of these dynamics to stationary Nash equilibria in identical-interest and zero-sum discounted stochastic games. Thanks to stochastic approximation, we can infer from the continuous-time convergence some discrete-time results, such as the convergence to stationary equilibria in zero-sum and team stochastic games (Holler, 2020).}
}
Endnote
%0 Conference Paper
%T Fictitious Play and Best-Response Dynamics in Identical Interest and Zero-Sum Stochastic Games
%A Lucas Baudin
%A Rida Laraki
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-baudin22a
%I PMLR
%P 1664--1690
%U https://proceedings.mlr.press/v162/baudin22a.html
%V 162
%X This paper proposes an extension of fictitious play (FP) (Brown, 1951; Robinson, 1951), a popular decentralized discrete-time learning procedure for repeated static games, to the dynamic model of discounted stochastic games (Shapley, 1953). Our family of discrete-time FP procedures is proven to converge to the set of stationary Nash equilibria in identical-interest discounted stochastic games. This extends similar convergence results for static games (Monderer & Shapley, 1996a). We then analyze the continuous-time counterpart of our FP procedures, which includes as a particular case the best-response dynamics introduced and studied by Leslie et al. (2020) in the context of zero-sum stochastic games. We prove the convergence of these dynamics to stationary Nash equilibria in identical-interest and zero-sum discounted stochastic games. Thanks to stochastic approximation, we can infer from the continuous-time convergence some discrete-time results, such as the convergence to stationary equilibria in zero-sum and team stochastic games (Holler, 2020).
APA
Baudin, L. & Laraki, R. (2022). Fictitious Play and Best-Response Dynamics in Identical Interest and Zero-Sum Stochastic Games. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:1664-1690. Available from https://proceedings.mlr.press/v162/baudin22a.html.