Adapting to game trees in zero-sum imperfect information games
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:10093-10135, 2023.
Abstract
Imperfect information games (IIGs) are games in which each player only partially observes the current game state. We study how to learn ε-optimal strategies in a zero-sum IIG through self-play with trajectory feedback. We give a problem-independent lower bound Õ(H(A_X + B_Y)/ε²) on the required number of realizations to learn these strategies with high probability, where H is the length of the game and A_X and B_Y are the total numbers of actions for the two players. We also propose two Follow the Regularized Leader (FTRL) algorithms for this setting: Balanced FTRL, which matches this lower bound but requires knowledge of the information set structure beforehand to define the regularization; and Adaptive FTRL, which needs Õ(H²(A_X + B_Y)/ε²) realizations without this requirement by progressively adapting the regularization to the observations.
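As background for the FTRL algorithms mentioned above, the sketch below shows the generic Follow the Regularized Leader update with a negative-entropy regularizer on the probability simplex, where the update has the closed-form exponential-weights solution. This is a standard illustration of the FTRL template only, not the paper's Balanced or Adaptive variants; the function name and step size are illustrative choices.

```python
import numpy as np

def ftrl_simplex(loss_vectors, eta=0.1):
    """Generic FTRL with a negative-entropy regularizer on the simplex.

    At each round t, FTRL plays
        x_t = argmin_x  <L_{t-1}, x> + (1/eta) * sum_i x_i log x_i
    over the probability simplex, where L_{t-1} is the cumulative loss.
    With this regularizer the argmin has the closed form
        x_t ∝ exp(-eta * L_{t-1})   (the exponential-weights update).
    """
    d = len(loss_vectors[0])
    cumulative = np.zeros(d)  # L_{t-1}: cumulative loss vector
    plays = []
    for loss in loss_vectors:
        w = np.exp(-eta * cumulative)
        plays.append(w / w.sum())  # play the regularized leader
        cumulative += loss         # then observe the round's loss
    return plays

# Example: two actions; action 1 incurs a unit loss every round,
# so FTRL shifts probability mass toward action 0 over time.
losses = [np.array([0.0, 1.0])] * 50
plays = ftrl_simplex(losses, eta=0.2)
```

The first play is uniform (no losses observed yet), and the weight on the lossless action grows toward 1 as the cumulative loss gap widens.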