Fast and Furious Symmetric Learning in Zero-Sum Games: Gradient Descent as Fictitious Play
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:3527-3577, 2025.
Abstract
This paper investigates the sublinear regret guarantees of two \textit{non}-no-regret algorithms in zero-sum games: Fictitious Play, and Online Gradient Descent with \textit{constant} stepsizes. In general adversarial online learning settings, both algorithms may exhibit instability and linear regret due to the absence of regularization (Fictitious Play) or small amounts of regularization (Gradient Descent). However, their ability to obtain tighter regret bounds in two-player zero-sum games is less understood. In this work, we obtain strong new regret guarantees for both algorithms on a class of symmetric zero-sum games that generalize the classic three-strategy Rock-Paper-Scissors to a weighted, $n$-dimensional regime. Under \textit{symmetric initializations} of the players’ strategies, we prove that Fictitious Play with \textit{any tiebreaking rule} has $O(\sqrt{T})$ regret, establishing a new class of games for which Karlin’s Fictitious Play conjecture holds. Moreover, by leveraging a connection between the geometry of the iterates of Fictitious Play and Gradient Descent in the dual space of payoff vectors, we prove that Gradient Descent, for \textit{almost all} symmetric initializations, obtains a similar $O(\sqrt{T})$ regret bound when its stepsize is a \textit{sufficiently large} constant. For Gradient Descent, this establishes the first “fast and furious” behavior (i.e., sublinear regret \textit{without} time-vanishing stepsizes) for zero-sum games larger than $2\times2$.
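To make the two dynamics concrete, the following is a minimal self-play sketch (illustrative only, not code from the paper): Fictitious Play with argmax tiebreaking, and projected Online Gradient Descent with a constant stepsize, both run on classic three-strategy Rock-Paper-Scissors while tracking the row player's external regret. The stepsize `eta`, horizon `T`, and the non-uniform symmetric initialization are arbitrary assumptions chosen for demonstration; the paper analyzes Gradient Descent via the dual space of payoff vectors, whereas this sketch uses the standard projected form on the simplex.

```python
import numpy as np

# Row player's payoff matrix for Rock-Paper-Scissors; zero-sum and
# antisymmetric (A = -A.T), so the game is symmetric.
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])
n, T = 3, 10_000

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# --- Fictitious Play: symmetric initial counts, argmax tiebreaking ---
cx, cy = np.ones(n), np.ones(n)       # identical (symmetric) fictitious counts
fp_best, fp_real = np.zeros(n), 0.0
for t in range(T):
    i = int(np.argmax(A @ (cy / cy.sum())))       # row best-responds to y's average
    j = int(np.argmax(-(A.T) @ (cx / cx.sum())))  # column best-responds to x's average
    fp_best += A[:, j]                 # payoff each fixed row action would have earned
    fp_real += A[i, j]                 # realized payoff
    cx[i] += 1.0
    cy[j] += 1.0
print("FP regret:", fp_best.max() - fp_real)       # empirically grows like sqrt(T)

# --- Online Gradient Descent with a constant ("furious") stepsize ---
eta = 1.0                              # arbitrary constant stepsize for illustration
x = y = np.array([0.5, 0.3, 0.2])      # symmetric but non-equilibrium initialization
gd_best, gd_real = np.zeros(n), 0.0
for t in range(T):
    gx, gy = A @ y, -(A.T) @ x         # each player's payoff gradient
    gd_best += gx                      # cumulative payoff of each fixed action
    gd_real += x @ gx                  # realized payoff of the current mixed strategy
    x, y = project_simplex(x + eta * gx), project_simplex(y + eta * gy)
print("GD regret:", gd_best.max() - gd_real)
```

With a time-vanishing stepsize $\eta_t \propto 1/\sqrt{t}$, the $O(\sqrt{T})$ bound for Gradient Descent is classical; the point of the result summarized above is that a \textit{large constant} stepsize achieves it on this class of games.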