Learning $\epsilon$-Nash equilibrium stationary policies in stochastic games with unknown independent chains using online mirror descent

Tiancheng Qin, S. Rasoul Etesami
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:784-795, 2024.

Abstract

We study a subclass of n-player stochastic games, namely, stochastic games with independent chains and unknown transition matrices. In this class of games, each player controls its own internal Markov chain, whose transitions do not depend on the states or actions of other players; the players' decisions are coupled only through their payoff functions. We assume that each player observes only realizations of its own payoffs, cannot observe the states and actions of other players, and does not know the transition probability matrix of its own Markov chain. Relying on a compact dual formulation of the game based on occupancy measures, together with confidence sets that maintain high-probability estimates of the unknown transition matrices, we propose a fully decentralized mirror descent algorithm to learn an $\epsilon$-Nash equilibrium stationary policy for this class of games. The proposed algorithm has the desired properties of independence and convergence. Specifically, assuming the existence of a variationally stable Nash equilibrium policy, we show that the algorithm, in which players make their decisions independently and in a decentralized fashion, converges asymptotically to the stable $\epsilon$-Nash equilibrium stationary policy with arbitrarily high probability.
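For intuition, the following is a minimal Python sketch of the kind of update the abstract describes: a single player running online mirror descent with an entropic mirror map over its occupancy measure (a distribution over its own state-action pairs), then recovering a stationary policy by conditioning. This is an illustrative sketch only, not the authors' exact method: the payoff-gradient feedback is a random stand-in, the step size is a generic choice, and the projection onto the polytope of occupancy measures consistent with the player's confidence set of transition matrices is replaced by a plain simplex normalization.

import numpy as np

def omd_step(q, grad, eta):
    """Entropic mirror-descent (exponentiated-gradient) step on an
    occupancy measure q over state-action pairs:
        q'(s, a) proportional to q(s, a) * exp(eta * grad(s, a)).
    NOTE: a full implementation would project onto the polytope of
    occupancy measures induced by the player's confidence set of
    transition matrices; this sketch only renormalizes onto the simplex."""
    logits = np.log(q) + eta * grad        # ascent direction on the payoff
    q_new = np.exp(logits - logits.max())  # shift for numerical stability
    return q_new / q_new.sum()             # renormalize to a distribution

def policy_from_occupancy(q):
    """Recover a stationary policy pi(a|s) = q(s, a) / sum_{a'} q(s, a')."""
    return q / q.sum(axis=1, keepdims=True)

# Toy usage: 3 states, 2 actions, noisy bandit-style payoff feedback.
rng = np.random.default_rng(0)
q = np.full((3, 2), 1.0 / 6.0)             # uniform initial occupancy measure
for t in range(1, 1001):
    grad = rng.normal(size=(3, 2))         # stand-in for realized payoff feedback
    q = omd_step(q, grad, eta=1.0 / np.sqrt(t))
pi = policy_from_occupancy(q)

Because transitions are unknown, the actual algorithm maintains high-probability confidence sets for the transition matrices and works with the occupancy measures those sets permit; the simplex step above is the simplest placeholder for that projection.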

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-qin24a,
  title     = {Learning $\epsilon$-{N}ash equilibrium stationary policies in stochastic games with unknown independent chains using online mirror descent},
  author    = {Qin, Tiancheng and Etesami, S. Rasoul},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {784--795},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/qin24a/qin24a.pdf},
  url       = {https://proceedings.mlr.press/v242/qin24a.html}
}
APA
Qin, T. & Etesami, S. R. (2024). Learning $\epsilon$-Nash equilibrium stationary policies in stochastic games with unknown independent chains using online mirror descent. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:784-795. Available from https://proceedings.mlr.press/v242/qin24a.html.
