Responding to Promises: No-regret learning against followers with memory

Vijeth Hebbar, Cedric Langbort
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:647-659, 2025.

Abstract

We consider a repeated Stackelberg game setup where the leader faces a sequence of followers of unknown types and must learn what commitments to make. While previous works have considered followers that best respond to the commitment announced by the leader in every round, we relax this setup in two ways. Motivated by natural scenarios where the leader’s reputation factors into how the followers choose their response, we consider followers with memory. Specifically, we model followers that base their response not just on the leader’s current commitment but on an aggregate of their past commitments. In developing learning strategies that the leader can employ against such followers, we make the second relaxation and assume boundedly rational followers, in particular, quantal responding followers. Interestingly, we observe that the smoothness property offered by the quantal response (QR) model helps in addressing the challenge posed by learning against followers with memory. Utilizing techniques from online learning, we develop algorithms that guarantee $O(\sqrt{T})$ regret against quantal responding memory-less followers and $O(\sqrt{BT})$ regret against followers with bounded memory of length $B$, with both bounds scaling polynomially in the game parameters.
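The quantal response model referenced in the abstract is commonly instantiated as the logit (softmax) response: a follower picks action $a$ with probability proportional to $\exp(\lambda\, u(a))$, where $u$ is the follower's utility and $\lambda$ is a rationality parameter. The sketch below is an illustration of that standard logit form only, not of the paper's algorithms; the utilities and `lam` value are made-up examples.

```python
import math

def quantal_response(utilities, lam=1.0):
    """Logit quantal-response distribution over actions.

    Action a is played with probability proportional to exp(lam * u(a)).
    lam -> infinity recovers the best response; lam = 0 gives uniform play.
    """
    m = max(utilities)  # subtract the max before exponentiating, for numerical stability
    weights = [math.exp(lam * (u - m)) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

# Example (hypothetical utilities): the highest-utility action is most
# likely but not certain, which is the smoothness the abstract alludes to.
probs = quantal_response([1.0, 2.0, 0.5], lam=1.0)
```

Because the resulting choice probabilities vary smoothly (rather than jumping discontinuously, as a best response does) with the leader's commitment, small changes in the commitment produce small changes in the follower's behavior.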

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-hebbar25a,
  title     = {Responding to Promises: No-regret learning against followers with memory},
  author    = {Hebbar, Vijeth and Langbort, Cedric},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {647--659},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/hebbar25a/hebbar25a.pdf},
  url       = {https://proceedings.mlr.press/v283/hebbar25a.html},
  abstract  = {We consider a repeated Stackelberg game setup where the leader faces a sequence of followers of unknown types and must learn what commitments to make. While previous works have considered followers that best respond to the commitment announced by the leader in every round, we relax this setup in two ways. Motivated by natural scenarios where the leader’s reputation factors into how the followers choose their response, we consider followers with memory. Specifically, we model followers that base their response not just on the leader’s current commitment but on an aggregate of their past commitments. In developing learning strategies that the leader can employ against such followers, we make the second relaxation and assume boundedly rational followers, in particular, quantal responding followers. Interestingly, we observe that the smoothness property offered by the quantal response (QR) model helps in addressing the challenge posed by learning against followers with memory. Utilizing techniques from online learning, we develop algorithms that guarantee $O(\sqrt{T})$ regret against quantal responding memory-less followers and $O(\sqrt{BT})$ regret against followers with bounded memory of length $B$, with both bounds scaling polynomially in the game parameters.}
}
Endnote
%0 Conference Paper
%T Responding to Promises: No-regret learning against followers with memory
%A Vijeth Hebbar
%A Cedric Langbort
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-hebbar25a
%I PMLR
%P 647--659
%U https://proceedings.mlr.press/v283/hebbar25a.html
%V 283
%X We consider a repeated Stackelberg game setup where the leader faces a sequence of followers of unknown types and must learn what commitments to make. While previous works have considered followers that best respond to the commitment announced by the leader in every round, we relax this setup in two ways. Motivated by natural scenarios where the leader’s reputation factors into how the followers choose their response, we consider followers with memory. Specifically, we model followers that base their response not just on the leader’s current commitment but on an aggregate of their past commitments. In developing learning strategies that the leader can employ against such followers, we make the second relaxation and assume boundedly rational followers, in particular, quantal responding followers. Interestingly, we observe that the smoothness property offered by the quantal response (QR) model helps in addressing the challenge posed by learning against followers with memory. Utilizing techniques from online learning, we develop algorithms that guarantee $O(\sqrt{T})$ regret against quantal responding memory-less followers and $O(\sqrt{BT})$ regret against followers with bounded memory of length $B$, with both bounds scaling polynomially in the game parameters.
APA
Hebbar, V. & Langbort, C. (2025). Responding to Promises: No-regret learning against followers with memory. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:647-659. Available from https://proceedings.mlr.press/v283/hebbar25a.html.