Learning and steering game dynamics towards desirable outcomes

Ilayda Canyakmaz, Iosif Sakos, Wayne Lin, Antonios Varvitsiotis, Georgios Piliouras
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:1512-1524, 2025.

Abstract

Game dynamics, which describe how agents’ strategies evolve over time based on past interactions, can exhibit a variety of undesirable behaviors, including convergence to suboptimal equilibria, cycling, and chaos. While central planners can employ incentives to mitigate such behaviors and steer game dynamics towards desirable outcomes, the effectiveness of such interventions relies critically on accurately predicting agents’ responses to these incentives, a task made particularly challenging when the underlying dynamics are unknown and observations are limited. To address this challenge, this work introduces the Side Information Assisted Regression with Model Predictive Control (SIAR-MPC) framework. We extend the recently introduced SIAR method to incorporate the effect of control, enabling it to exploit side-information constraints inherent to game-theoretic applications and thereby model agents’ responses to incentives from scarce data. MPC then leverages this model to implement dynamic incentive adjustments. Our experiments demonstrate the effectiveness of SIAR-MPC in guiding systems towards socially optimal equilibria and in stabilizing chaotic and cycling behaviors. Notably, it achieves these results in data-scarce settings with few learning samples, where well-known system identification methods paired with MPC prove less effective.
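To make the two-stage idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' implementation): first fit a model of how strategies respond to incentives from a small batch of observed transitions, then use that model inside a receding-horizon (MPC) loop to steer play towards a target outcome. All names (`features`, `fit_model`, `mpc_step`) are illustrative; the paper's SIAR step additionally enforces game-theoretic side-information constraints, which this plain least-squares fit omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, u):
    """Polynomial features in (state, control); a stand-in for the richer,
    constraint-aware basis used by SIAR."""
    z = np.concatenate([x, u])
    quad = np.outer(z, z)[np.triu_indices(len(z))]
    return np.concatenate([[1.0], z, quad])

def fit_model(X, U, X_next):
    """Least-squares fit of x_{t+1} ~ Theta @ features(x_t, u_t) from few samples."""
    Phi = np.stack([features(x, u) for x, u in zip(X, U)])
    Theta, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)
    return Theta  # shape: (n_features, state_dim)

def mpc_step(Theta, x, x_star, horizon=5, n_candidates=200, u_dim=2):
    """Random-shooting MPC: sample incentive sequences, roll the learned model
    forward, and return the first control of the best sequence."""
    best_u, best_cost = None, np.inf
    for _ in range(n_candidates):
        U_seq = rng.uniform(-1.0, 1.0, size=(horizon, u_dim))
        x_pred, cost = x.copy(), 0.0
        for u in U_seq:
            x_pred = features(x_pred, u) @ Theta
            cost += np.sum((x_pred - x_star) ** 2)
        if cost < best_cost:
            best_cost, best_u = cost, U_seq[0]
    return best_u

# Example use: collect a few (x_t, u_t, x_{t+1}) transitions, fit, then steer:
#   Theta = fit_model(X, U, X_next)
#   u0 = mpc_step(Theta, x_current, x_target)
```

Random shooting and plain least squares are used here only to keep the sketch short; per the abstract, the distinguishing feature of SIAR-MPC is that the fitting step enforces side-information constraints, which is what allows a usable response model to be recovered from very few samples before MPC adjusts the incentives online.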

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-canyakmaz25a,
  title     = {Learning and steering game dynamics towards desirable outcomes},
  author    = {Canyakmaz, Ilayda and Sakos, Iosif and Lin, Wayne and Varvitsiotis, Antonios and Piliouras, Georgios},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {1512--1524},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/canyakmaz25a/canyakmaz25a.pdf},
  url       = {https://proceedings.mlr.press/v283/canyakmaz25a.html},
  abstract  = {Game dynamics, which describe how agents’ strategies evolve over time based on past interactions, can exhibit a variety of undesirable behaviours including convergence to suboptimal equilibria, cycling, and chaos. While central planners can employ incentives to mitigate such behaviors and steer game dynamics towards desirable outcomes, the effectiveness of such interventions critically relies on accurately predicting agents’ responses to these incentives—a task made particularly challenging when the underlying dynamics are unknown and observations are limited. To address this challenge, this work introduces the Side Information Assisted Regression with Model Predictive Control (SIAR-MPC) framework. We extend the recently introduced SIAR method to incorporate the effect of control, enabling it to utilize side-information constraints inherent to game-theoretic applications to model agents’ responses to incentives from scarce data. MPC then leverages this model to implement dynamic incentive adjustments. Our experiments demonstrate the effectiveness of SIAR-MPC in guiding systems towards socially optimal equilibria, stabilizing chaotic and cycling behaviors. Notably, it achieves these results in data-scarce settings of few learning samples, where well-known system identification methods paired with MPC show less effective results.}
}
Endnote
%0 Conference Paper
%T Learning and steering game dynamics towards desirable outcomes
%A Ilayda Canyakmaz
%A Iosif Sakos
%A Wayne Lin
%A Antonios Varvitsiotis
%A Georgios Piliouras
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-canyakmaz25a
%I PMLR
%P 1512--1524
%U https://proceedings.mlr.press/v283/canyakmaz25a.html
%V 283
%X Game dynamics, which describe how agents’ strategies evolve over time based on past interactions, can exhibit a variety of undesirable behaviours including convergence to suboptimal equilibria, cycling, and chaos. While central planners can employ incentives to mitigate such behaviors and steer game dynamics towards desirable outcomes, the effectiveness of such interventions critically relies on accurately predicting agents’ responses to these incentives—a task made particularly challenging when the underlying dynamics are unknown and observations are limited. To address this challenge, this work introduces the Side Information Assisted Regression with Model Predictive Control (SIAR-MPC) framework. We extend the recently introduced SIAR method to incorporate the effect of control, enabling it to utilize side-information constraints inherent to game-theoretic applications to model agents’ responses to incentives from scarce data. MPC then leverages this model to implement dynamic incentive adjustments. Our experiments demonstrate the effectiveness of SIAR-MPC in guiding systems towards socially optimal equilibria, stabilizing chaotic and cycling behaviors. Notably, it achieves these results in data-scarce settings of few learning samples, where well-known system identification methods paired with MPC show less effective results.
APA
Canyakmaz, I., Sakos, I., Lin, W., Varvitsiotis, A., & Piliouras, G. (2025). Learning and steering game dynamics towards desirable outcomes. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:1512-1524. Available from https://proceedings.mlr.press/v283/canyakmaz25a.html.
