Learning $\epsilon$-Nash equilibrium stationary policies in stochastic games with unknown independent chains using online mirror descent
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:784-795, 2024.
Abstract
We study a subclass of n-player stochastic games, namely stochastic games with independent chains and unknown transition matrices. In this class of games, each player controls its own internal Markov chain, whose transitions do not depend on the states or actions of the other players; the players' decisions are coupled only through their payoff functions. We assume that each player observes only realizations of its own payoff, cannot observe the states or actions of the other players, and does not know the transition probability matrix of its own Markov chain. Relying on a compact dual formulation of the game based on occupancy measures, and on confidence sets that maintain high-probability estimates of the unknown transition matrices, we propose a fully decentralized mirror descent algorithm for learning an $\epsilon$-Nash equilibrium stationary policy in this class of games. The proposed algorithm has the desired properties of independence and convergence: assuming the existence of a variationally stable Nash equilibrium policy, we show that the algorithm, in which players make their decisions independently and in a decentralized fashion, converges asymptotically to the stable $\epsilon$-Nash equilibrium stationary policy with arbitrarily high probability.
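The online mirror descent primitive underlying such an algorithm can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: it runs entropic (negative-entropy) mirror descent for a single player over a small probability simplex standing in for an occupancy measure, with a fixed synthetic payoff vector in place of the bandit payoff feedback and confidence-set machinery of the paper.

```python
import numpy as np

def mirror_descent_step(x, grad, lr):
    """One entropic (KL) mirror descent ascent step on the probability simplex.

    With the negative-entropy mirror map, the update is multiplicative:
    x_{t+1} is proportional to x_t * exp(lr * grad).
    """
    y = x * np.exp(lr * grad)
    return y / y.sum()

# Toy run: one player ascending a fixed linear payoff u^T x over a
# 3-point simplex (a stand-in for an occupancy measure). The payoff
# vector u is a synthetic placeholder, not from the paper.
u = np.array([0.2, 0.5, 0.3])   # illustrative payoff gradient
x = np.ones(3) / 3              # uniform initial occupancy measure
for t in range(1, 201):
    x = mirror_descent_step(x, u, lr=1.0 / np.sqrt(t))  # decaying step size
# The iterate concentrates mass on the highest-payoff coordinate.
```

In the decentralized setting of the paper, each player would run such an update on its own occupancy-measure polytope, using estimated payoffs and transition matrices maintained within high-probability confidence sets.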