Finite-Time Convergence Rates in Stochastic Stackelberg Games with Smooth Algorithmic Agents

Eric Frankel, Kshitij Kulkarni, Dmitriy Drusvyatskiy, Sewoong Oh, Lillian J. Ratliff
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:17575-17656, 2025.

Abstract

Decision-makers often adaptively influence downstream competitive agents’ behavior to minimize their cost, yet in doing so face critical challenges: $(i)$ decision-makers might not a priori know the agents’ objectives; $(ii)$ agents might learn their responses, introducing stochasticity and non-stationarity into the decision-making process; and $(iii)$ there may be additional non-strategic environmental stochasticity. Characterizing convergence of this complex system is contingent on how the decision-maker controls for the tradeoff between the induced drift and additional noise from the learning agent behavior and environmental stochasticity. To understand how the learning agents’ behavior is influenced by the decision-maker’s actions, we first consider a decision-maker that deploys an arbitrary sequence of actions which induces a sequence of games and corresponding equilibria. We characterize how the drift and noise in the agents’ stochastic algorithms decouple from their optimization error. Leveraging this decoupling and accompanying finite-time efficiency estimates, we design decision-maker algorithms that control the induced drift relative to the agent noise. This enables efficient finite-time tracking of game theoretic equilibrium concepts that adhere to the incentives of the players’ collective learning processes.
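The two-timescale leader/follower dynamic described in the abstract can be illustrated with a minimal hypothetical sketch (not taken from the paper): a follower runs noisy gradient steps toward a leader-dependent equilibrium, while the leader updates on a slower timescale using the follower's current iterate as an approximate best response. The quadratic costs, the best-response slope `a`, and all step sizes below are illustrative assumptions.

```python
import random

random.seed(0)

a = 0.5                                # assumed follower best response: y*(x) = a * x
x, y = 0.0, 0.0                        # leader and follower iterates
eta_leader, eta_follower = 0.01, 0.1   # leader moves on the slower timescale

for t in range(5000):
    # Follower: noisy gradient step on f(x, y) = (y - a*x)^2 / 2,
    # i.e., stochastic tracking of the leader-dependent equilibrium y*(x).
    noise = random.gauss(0.0, 0.1)
    y -= eta_follower * ((y - a * x) + noise)

    # Leader: gradient step on g(x, y) = (x - 1)^2/2 + (y - 1)^2/2 along the
    # best-response curve, substituting the follower's observed iterate y.
    # Total derivative: (x - 1) + (dy*/dx) * (y - 1) with dy*/dx = a.
    x -= eta_leader * ((x - 1.0) + a * (y - 1.0))

# For these costs, the Stackelberg solution is x* = (1 + a) / (1 + a^2) = 1.2,
# with y* = a * x* = 0.6; the iterates hover near it up to the injected noise.
print(x, y)
```

The slower leader step size keeps the drift it induces in the follower's target small relative to the follower's own noise, which is the tradeoff the abstract refers to.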

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-frankel25b,
  title     = {Finite-Time Convergence Rates in Stochastic Stackelberg Games with Smooth Algorithmic Agents},
  author    = {Frankel, Eric and Kulkarni, Kshitij and Drusvyatskiy, Dmitriy and Oh, Sewoong and Ratliff, Lillian J.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {17575--17656},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/frankel25b/frankel25b.pdf},
  url       = {https://proceedings.mlr.press/v267/frankel25b.html},
  abstract  = {Decision-makers often adaptively influence downstream competitive agents’ behavior to minimize their cost, yet in doing so face critical challenges: $(i)$ decision-makers might not a priori know the agents’ objectives; $(ii)$ agents might learn their responses, introducing stochasticity and non-stationarity into the decision-making process; and $(iii)$ there may be additional non-strategic environmental stochasticity. Characterizing convergence of this complex system is contingent on how the decision-maker controls for the tradeoff between the induced drift and additional noise from the learning agent behavior and environmental stochasticity. To understand how the learning agents’ behavior is influenced by the decision-maker’s actions, we first consider a decision-maker that deploys an arbitrary sequence of actions which induces a sequence of games and corresponding equilibria. We characterize how the drift and noise in the agents’ stochastic algorithms decouples from their optimization error. Leveraging this decoupling and accompanying finite-time efficiency estimates, we design decision-maker algorithms that control the induced drift relative to the agent noise. This enables efficient finite-time tracking of game theoretic equilibrium concepts that adhere to the incentives of the players’ collective learning processes.}
}
Endnote
%0 Conference Paper
%T Finite-Time Convergence Rates in Stochastic Stackelberg Games with Smooth Algorithmic Agents
%A Eric Frankel
%A Kshitij Kulkarni
%A Dmitriy Drusvyatskiy
%A Sewoong Oh
%A Lillian J. Ratliff
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-frankel25b
%I PMLR
%P 17575--17656
%U https://proceedings.mlr.press/v267/frankel25b.html
%V 267
%X Decision-makers often adaptively influence downstream competitive agents’ behavior to minimize their cost, yet in doing so face critical challenges: $(i)$ decision-makers might not a priori know the agents’ objectives; $(ii)$ agents might learn their responses, introducing stochasticity and non-stationarity into the decision-making process; and $(iii)$ there may be additional non-strategic environmental stochasticity. Characterizing convergence of this complex system is contingent on how the decision-maker controls for the tradeoff between the induced drift and additional noise from the learning agent behavior and environmental stochasticity. To understand how the learning agents’ behavior is influenced by the decision-maker’s actions, we first consider a decision-maker that deploys an arbitrary sequence of actions which induces a sequence of games and corresponding equilibria. We characterize how the drift and noise in the agents’ stochastic algorithms decouples from their optimization error. Leveraging this decoupling and accompanying finite-time efficiency estimates, we design decision-maker algorithms that control the induced drift relative to the agent noise. This enables efficient finite-time tracking of game theoretic equilibrium concepts that adhere to the incentives of the players’ collective learning processes.
APA
Frankel, E., Kulkarni, K., Drusvyatskiy, D., Oh, S. & Ratliff, L.J. (2025). Finite-Time Convergence Rates in Stochastic Stackelberg Games with Smooth Algorithmic Agents. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:17575-17656. Available from https://proceedings.mlr.press/v267/frankel25b.html.

Related Material