The impact of uncertainty on regularized learning in games

Pierre-Louis Cauvin, Davide Legacci, Panayotis Mertikopoulos
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:6920-6957, 2025.

Abstract

In this paper, we investigate how randomness and uncertainty influence learning in games. Specifically, we examine a perturbed variant of the dynamics of “follow-the-regularized-leader” (FTRL), where the players’ payoff observations and strategy updates are continually impacted by random shocks. Our findings reveal that, in a fairly precise sense, “uncertainty favors extremes”: in any game, regardless of the noise level, every player’s trajectory of play reaches an arbitrarily small neighborhood of a pure strategy in finite time (which we estimate). Moreover, even if the player does not ultimately settle at this strategy, they return arbitrarily close to some (possibly different) pure strategy infinitely often. This prompts the question of which sets of pure strategies emerge as robust predictions of learning under uncertainty. We show that (a) the only possible limits of the FTRL dynamics under uncertainty are pure Nash equilibria; and (b) a span of pure strategies is stable and attracting if and only if it is closed under better replies. Finally, we turn to games where the deterministic dynamics are recurrent—such as zero-sum games with interior equilibria—and show that randomness disrupts this behavior, causing the stochastic dynamics to drift toward the boundary on average.
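To make the setting concrete, below is a minimal simulation sketch (not the paper's exact model): an Euler-Maruyama discretization of entropic FTRL ("exponential weights") in Matching Pennies, a zero-sum game with an interior equilibrium, with Gaussian shocks added to the players' score updates. The step size, horizon, and noise level are illustrative assumptions, not values from the paper. With the noise switched off the trajectory cycles around the interior equilibrium (the recurrent regime mentioned above); with noise on, both players come arbitrarily close to pure strategies, in line with the abstract's "uncertainty favors extremes".

# A minimal sketch, NOT the paper's exact model: Euler-Maruyama simulation of
# entropic FTRL ("exponential weights") in Matching Pennies, with Gaussian
# shocks on the players' score updates. dt, T, and sigma are illustrative
# assumptions, not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])      # player 1's payoff matrix; player 2 gets -A

def logit(y):
    """Choice map of the entropic regularizer (softmax of the scores)."""
    z = np.exp(y - y.max())      # subtract max for numerical stability
    return z / z.sum()

dt, T, sigma = 1e-3, 50.0, 0.5   # step size, horizon, noise level (assumed)
y1 = np.zeros(2)                 # player 1's cumulative payoff scores
y2 = np.zeros(2)                 # player 2's cumulative payoff scores
closest = [1.0, 1.0]             # per-player closest approach to a pure strategy

for _ in range(int(T / dt)):
    x1, x2 = logit(y1), logit(y2)
    y1 += (A @ x2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
    y2 += (-A.T @ x1) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
    closest[0] = min(closest[0], x1.min())  # small min-coordinate = near a vertex
    closest[1] = min(closest[1], x2.min())

# With sigma = 0 the orbit cycles around the interior equilibrium (1/2, 1/2);
# with sigma > 0 both players make close approaches to pure strategies.
print(f"closest approach to a pure strategy: {closest[0]:.4f}, {closest[1]:.4f}")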

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cauvin25a,
  title     = {The impact of uncertainty on regularized learning in games},
  author    = {Cauvin, Pierre-Louis and Legacci, Davide and Mertikopoulos, Panayotis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {6920--6957},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cauvin25a/cauvin25a.pdf},
  url       = {https://proceedings.mlr.press/v267/cauvin25a.html}
}
Endnote
%0 Conference Paper
%T The impact of uncertainty on regularized learning in games
%A Pierre-Louis Cauvin
%A Davide Legacci
%A Panayotis Mertikopoulos
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cauvin25a
%I PMLR
%P 6920--6957
%U https://proceedings.mlr.press/v267/cauvin25a.html
%V 267
APA
Cauvin, P., Legacci, D. & Mertikopoulos, P. (2025). The impact of uncertainty on regularized learning in games. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:6920-6957. Available from https://proceedings.mlr.press/v267/cauvin25a.html.