Fast and Furious Symmetric Learning in Zero-Sum Games: Gradient Descent as Fictitious Play

John Lazarsfeld, Georgios Piliouras, Ryann Sim, Andre Wibisono
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:3527-3577, 2025.

Abstract

This paper investigates the sublinear regret guarantees of two \textit{non}-no-regret algorithms in zero-sum games: Fictitious Play, and Online Gradient Descent with \textit{constant} stepsizes. In general adversarial online learning settings, both algorithms may exhibit instability and linear regret due to no regularization (Fictitious Play) or small amounts of regularization (Gradient Descent). However, their ability to obtain tighter regret bounds in two-player zero-sum games is less understood. In this work, we obtain strong new regret guarantees for both algorithms on a class of symmetric zero-sum games that generalize the classic three-strategy Rock-Paper-Scissors to a weighted, $n$-dimensional regime. Under \textit{symmetric initializations} of the players’ strategies, we prove that Fictitious Play with \textit{any tiebreaking rule} has $O(\sqrt{T})$ regret, establishing a new class of games for which Karlin’s Fictitious Play conjecture holds. Moreover, by leveraging a connection between the geometry of the iterates of Fictitious Play and Gradient Descent in the dual space of payoff vectors, we prove that Gradient Descent, for \textit{almost all} symmetric initializations, obtains a similar $O(\sqrt{T})$ regret bound when its stepsize is a \textit{sufficiently large} constant. For Gradient Descent, this establishes the first “fast and furious” behavior (i.e., sublinear regret \textit{without} time-vanishing stepsizes) for zero-sum games larger than $2\times2$.
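To make the regret notion concrete, the following sketch simulates Fictitious Play in symmetric self-play on classic Rock-Paper-Scissors and tracks the row player's cumulative regret. This is an illustrative simulation, not code from the paper: the uniform symmetric initialization and the lowest-index tiebreaking rule are our own choices (the paper's $O(\sqrt{T})$ bound covers any tiebreaking rule under symmetric initializations).

```python
import numpy as np

# Payoff matrix of classic Rock-Paper-Scissors for the row player (zero-sum).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def fictitious_play_regret(A, T, x0):
    """Symmetric self-play Fictitious Play: both players best-respond to the
    shared empirical average, starting from the same initialization x0.
    Returns the row player's cumulative regret after each round."""
    n = A.shape[0]
    counts = np.array(x0, dtype=float)   # (weighted) empirical play counts
    cum_payoff = np.zeros(n)             # payoff each fixed action would have earned
    realized = 0.0                       # payoff of the actions actually played
    regrets = np.empty(T)
    for t in range(T):
        y = counts / counts.sum()        # opponent's empirical average (by symmetry)
        i = int(np.argmax(A @ y))        # best response; ties broken by lowest index
        cum_payoff += A[:, i]            # each fixed action's payoff vs. the play i
        realized += A[i, i]              # by symmetry, the opponent plays i as well
        counts[i] += 1.0
        regrets[t] = cum_payoff.max() - realized
    return regrets

T = 10_000
regrets = fictitious_play_regret(A, T, x0=np.ones(3))  # symmetric uniform start
```

Running this, the cumulative regret stays far below linear growth in $T$, consistent with the $O(\sqrt{T})$ guarantee; the analogous constant-stepsize Gradient Descent experiment additionally requires a projection onto the simplex and a sufficiently large stepsize.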

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-lazarsfeld25a,
  title = {Fast and Furious Symmetric Learning in Zero-Sum Games: Gradient Descent as Fictitious Play},
  author = {Lazarsfeld, John and Piliouras, Georgios and Sim, Ryann and Wibisono, Andre},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages = {3527--3577},
  year = {2025},
  editor = {Haghtalab, Nika and Moitra, Ankur},
  volume = {291},
  series = {Proceedings of Machine Learning Research},
  month = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/lazarsfeld25a/lazarsfeld25a.pdf},
  url = {https://proceedings.mlr.press/v291/lazarsfeld25a.html},
  abstract = {This paper investigates the sublinear regret guarantees of two \textit{non}-no-regret algorithms in zero-sum games: Fictitious Play, and Online Gradient Descent with \textit{constant} stepsizes. In general adversarial online learning settings, both algorithms may exhibit instability and linear regret due to no regularization (Fictitious Play) or small amounts of regularization (Gradient Descent). However, their ability to obtain tighter regret bounds in two-player zero-sum games is less understood. In this work, we obtain strong new regret guarantees for both algorithms on a class of symmetric zero-sum games that generalize the classic three-strategy Rock-Paper-Scissors to a weighted, $n$-dimensional regime. Under \textit{symmetric initializations} of the players’ strategies, we prove that Fictitious Play with \textit{any tiebreaking rule} has $O(\sqrt{T})$ regret, establishing a new class of games for which Karlin’s Fictitious Play conjecture holds. Moreover, by leveraging a connection between the geometry of the iterates of Fictitious Play and Gradient Descent in the dual space of payoff vectors, we prove that Gradient Descent, for \textit{almost all} symmetric initializations, obtains a similar $O(\sqrt{T})$ regret bound when its stepsize is a \textit{sufficiently large} constant. For Gradient Descent, this establishes the first “fast and furious” behavior (i.e., sublinear regret \textit{without} time-vanishing stepsizes) for zero-sum games larger than $2\times2$.}
}
Endnote
%0 Conference Paper %T Fast and Furious Symmetric Learning in Zero-Sum Games: Gradient Descent as Fictitious Play %A John Lazarsfeld %A Georgios Piliouras %A Ryann Sim %A Andre Wibisono %B Proceedings of Thirty Eighth Conference on Learning Theory %C Proceedings of Machine Learning Research %D 2025 %E Nika Haghtalab %E Ankur Moitra %F pmlr-v291-lazarsfeld25a %I PMLR %P 3527--3577 %U https://proceedings.mlr.press/v291/lazarsfeld25a.html %V 291 %X This paper investigates the sublinear regret guarantees of two \textit{non}-no-regret algorithms in zero-sum games: Fictitious Play, and Online Gradient Descent with \textit{constant} stepsizes. In general adversarial online learning settings, both algorithms may exhibit instability and linear regret due to no regularization (Fictitious Play) or small amounts of regularization (Gradient Descent). However, their ability to obtain tighter regret bounds in two-player zero-sum games is less understood. In this work, we obtain strong new regret guarantees for both algorithms on a class of symmetric zero-sum games that generalize the classic three-strategy Rock-Paper-Scissors to a weighted, $n$-dimensional regime. Under \textit{symmetric initializations} of the players’ strategies, we prove that Fictitious Play with \textit{any tiebreaking rule} has $O(\sqrt{T})$ regret, establishing a new class of games for which Karlin’s Fictitious Play conjecture holds. Moreover, by leveraging a connection between the geometry of the iterates of Fictitious Play and Gradient Descent in the dual space of payoff vectors, we prove that Gradient Descent, for \textit{almost all} symmetric initializations, obtains a similar $O(\sqrt{T})$ regret bound when its stepsize is a \textit{sufficiently large} constant. For Gradient Descent, this establishes the first “fast and furious” behavior (i.e., sublinear regret \textit{without} time-vanishing stepsizes) for zero-sum games larger than $2\times2$.
APA
Lazarsfeld, J., Piliouras, G., Sim, R. & Wibisono, A. (2025). Fast and Furious Symmetric Learning in Zero-Sum Games: Gradient Descent as Fictitious Play. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:3527-3577. Available from https://proceedings.mlr.press/v291/lazarsfeld25a.html.
