Reinforcement Learning for Mean Field Games with Strategic Complementarities

Kiyeob Lee, Desik Rengarajan, Dileep Kalathil, Srinivas Shakkottai
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2458-2466, 2021.

Abstract

Mean Field Games (MFG) are a class of games with a very large number of agents, for which the standard equilibrium concept is the Mean Field Equilibrium (MFE). Algorithms for learning MFE in dynamic MFGs are unknown in general. Our focus is on an important subclass that possesses a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement to the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We also introduce a model-free and a model-based approach to learning T-MFE and provide sample complexities of both algorithms. We also develop a fully online learning scheme that obviates the need for a simulator. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications.
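The paper's own algorithms are not reproduced on this page. As an illustrative sketch only, computing a T-MFE under a known model might look like the following fixed-point iteration in a tiny finite setting: solve the single-agent MDP at the current mean field, apply softmax (Boltzmann) policies as a stand-in for trembling-hand randomization, and update the mean field to the stationary distribution of the induced chain. The function name, reward model, and update scheme here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def t_mfe_fixed_point(P, r, gamma=0.9, temp=0.1, n_outer=100, tol=1e-8):
    """Hypothetical sketch of a T-MFE fixed-point computation.

    P: known transition kernel, shape (S, A, S).
    r: callable; r(L) returns the reward matrix of shape (S, A)
       as a function of the mean field L (a distribution over states).
    temp: softmax temperature modeling the trembling-hand randomization.
    """
    S, A, _ = P.shape
    L = np.ones(S) / S                      # initial mean field: uniform
    pi = np.ones((S, A)) / A
    for _ in range(n_outer):
        # 1) Solve the single-agent MDP at the current mean field
        #    via value iteration with a soft (log-sum-exp) backup.
        Q = np.zeros((S, A))
        R = r(L)
        for _ in range(1000):
            m = Q.max(axis=1)
            V = m + temp * np.log(np.exp((Q - m[:, None]) / temp).sum(axis=1))
            Q_new = R + gamma * P @ V
            if np.max(np.abs(Q_new - Q)) < tol:
                Q = Q_new
                break
            Q = Q_new
        # 2) Trembling-hand policy: softmax over Q-values.
        pi = np.exp((Q - Q.max(axis=1, keepdims=True)) / temp)
        pi /= pi.sum(axis=1, keepdims=True)
        # 3) Mean-field update: stationary distribution of the chain
        #    induced by the softmax policy.
        P_pi = np.einsum('sa,sab->sb', pi, P)
        for _ in range(1000):
            L_new = L @ P_pi
            if np.max(np.abs(L_new - L)) < tol:
                L = L_new
                break
            L = L_new
    return L, pi

# Toy example with a complementarity flavor: the reward for moving to a
# state grows with the mass of the population already there.
P = np.zeros((2, 2, 2))
P[:, 0, 0] = 1.0   # action 0 deterministically moves to state 0
P[:, 1, 1] = 1.0   # action 1 deterministically moves to state 1
def r(L):
    return np.array([[L[0], L[1]], [L[0], L[1]]])

L, pi = t_mfe_fixed_point(P, r)
```

Under strategic complementarities, such an iteration would exploit the monotonicity of the best response in the mean field; the sketch above makes no such guarantee and simply iterates until the tolerances are met.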

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-lee21b,
  title     = {Reinforcement Learning for Mean Field Games with Strategic Complementarities},
  author    = {Lee, Kiyeob and Rengarajan, Desik and Kalathil, Dileep and Shakkottai, Srinivas},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2458--2466},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/lee21b/lee21b.pdf},
  url       = {https://proceedings.mlr.press/v130/lee21b.html},
  abstract  = {Mean Field Games (MFG) are the class of games with a very large number of agents and the standard equilibrium concept is a Mean Field Equilibrium (MFE). Algorithms for learning MFE in dynamic MFGs are unknown in general. Our focus is on an important subclass that possess a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement to the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We also introduce a model-free and a model-based approach to learning T-MFE and provide sample complexities of both algorithms. We also develop a fully online learning scheme that obviates the need for a simulator. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications.}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning for Mean Field Games with Strategic Complementarities
%A Kiyeob Lee
%A Desik Rengarajan
%A Dileep Kalathil
%A Srinivas Shakkottai
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-lee21b
%I PMLR
%P 2458--2466
%U https://proceedings.mlr.press/v130/lee21b.html
%V 130
%X Mean Field Games (MFG) are the class of games with a very large number of agents and the standard equilibrium concept is a Mean Field Equilibrium (MFE). Algorithms for learning MFE in dynamic MFGs are unknown in general. Our focus is on an important subclass that possess a monotonicity property called Strategic Complementarities (MFG-SC). We introduce a natural refinement to the equilibrium concept that we call Trembling-Hand-Perfect MFE (T-MFE), which allows agents to employ a measure of randomization while accounting for the impact of such randomization on their payoffs. We propose a simple algorithm for computing T-MFE under a known model. We also introduce a model-free and a model-based approach to learning T-MFE and provide sample complexities of both algorithms. We also develop a fully online learning scheme that obviates the need for a simulator. Finally, we empirically evaluate the performance of the proposed algorithms via examples motivated by real-world applications.
APA
Lee, K., Rengarajan, D., Kalathil, D. &amp; Shakkottai, S. (2021). Reinforcement Learning for Mean Field Games with Strategic Complementarities. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2458-2466. Available from https://proceedings.mlr.press/v130/lee21b.html.