Scalable Deep Reinforcement Learning Algorithms for Mean Field Games

Mathieu Lauriere, Sarah Perrin, Sertan Girgin, Paul Muller, Ayush Jain, Theophile Cabannes, Georgios Piliouras, Julien Perolat, Romuald Elie, Olivier Pietquin, Matthieu Geist
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:12078-12095, 2022.

Abstract

Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents. Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods. One limiting factor to further scale up using RL is that existing algorithms to solve MFGs require the mixing of approximated quantities such as strategies or $q$-values. This is far from trivial in the case of non-linear function approximators that enjoy good generalization properties, e.g., neural networks. We propose two methods to address this shortcoming. The first one learns a mixed strategy from distillation of historical data into a neural network and is applied to the Fictitious Play algorithm. The second one is an online mixing method based on regularization that does not require memorizing historical data or previous estimates. It is used to extend Online Mirror Descent. We demonstrate numerically that these methods efficiently enable the use of Deep RL algorithms to solve various MFGs. In addition, we show that these methods outperform state-of-the-art baselines from the literature.
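
The first method described above distills the mixture of past best responses into a single network. The following is a minimal sketch of that distillation idea only, not the paper's implementation: it assumes discrete actions and a hypothetical buffer of (state, action) pairs pooled from all past best-response policies, and fits one policy network to this pooled data by behavioral cloning so that it approximates their uniform mixture. All names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Small categorical policy over a discrete action set."""

    def __init__(self, state_dim, num_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, states):
        return self.body(states)  # action logits


def distill_mixed_policy(history_states, history_actions, state_dim, num_actions,
                         epochs=10, batch_size=256, lr=1e-3):
    """Fit one network to (state, action) pairs collected from all past
    best responses, approximating their uniform mixture (illustrative only)."""
    policy = PolicyNet(state_dim, num_actions)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    dataset = torch.utils.data.TensorDataset(history_states, history_actions)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for states, actions in loader:
            logits = policy(states)
            # Cross-entropy on the pooled historical data = behavioral cloning
            # of the mixture of past best responses.
            loss = F.cross_entropy(logits, actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```

In a Fictitious-Play-style loop, one would alternate between computing a best response against the current mean-field distribution (e.g., with a deep RL algorithm), appending the data it generates to the historical buffer, and re-running the distillation step above to obtain the new averaged policy.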

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-lauriere22a, title = {Scalable Deep Reinforcement Learning Algorithms for Mean Field Games}, author = {Lauriere, Mathieu and Perrin, Sarah and Girgin, Sertan and Muller, Paul and Jain, Ayush and Cabannes, Theophile and Piliouras, Georgios and Perolat, Julien and Elie, Romuald and Pietquin, Olivier and Geist, Matthieu}, booktitle = {Proceedings of the 39th International Conference on Machine Learning}, pages = {12078--12095}, year = {2022}, editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan}, volume = {162}, series = {Proceedings of Machine Learning Research}, month = {17--23 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v162/lauriere22a/lauriere22a.pdf}, url = {https://proceedings.mlr.press/v162/lauriere22a.html}, abstract = {Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents. Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods. One limiting factor to further scale up using RL is that existing algorithms to solve MFGs require the mixing of approximated quantities such as strategies or $q$-values. This is far from being trivial in the case of non-linear function approximation that enjoy good generalization properties, e.g. neural networks. We propose two methods to address this shortcoming. The first one learns a mixed strategy from distillation of historical data into a neural network and is applied to the Fictitious Play algorithm. The second one is an online mixing method based on regularization that does not require memorizing historical data or previous estimates. It is used to extend Online Mirror Descent. We demonstrate numerically that these methods efficiently enable the use of Deep RL algorithms to solve various MFGs. In addition, we show that these methods outperform SotA baselines from the literature.} }
Endnote
%0 Conference Paper %T Scalable Deep Reinforcement Learning Algorithms for Mean Field Games %A Mathieu Lauriere %A Sarah Perrin %A Sertan Girgin %A Paul Muller %A Ayush Jain %A Theophile Cabannes %A Georgios Piliouras %A Julien Perolat %A Romuald Elie %A Olivier Pietquin %A Matthieu Geist %B Proceedings of the 39th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2022 %E Kamalika Chaudhuri %E Stefanie Jegelka %E Le Song %E Csaba Szepesvari %E Gang Niu %E Sivan Sabato %F pmlr-v162-lauriere22a %I PMLR %P 12078--12095 %U https://proceedings.mlr.press/v162/lauriere22a.html %V 162 %X Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents. Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods. One limiting factor to further scale up using RL is that existing algorithms to solve MFGs require the mixing of approximated quantities such as strategies or $q$-values. This is far from being trivial in the case of non-linear function approximation that enjoy good generalization properties, e.g. neural networks. We propose two methods to address this shortcoming. The first one learns a mixed strategy from distillation of historical data into a neural network and is applied to the Fictitious Play algorithm. The second one is an online mixing method based on regularization that does not require memorizing historical data or previous estimates. It is used to extend Online Mirror Descent. We demonstrate numerically that these methods efficiently enable the use of Deep RL algorithms to solve various MFGs. In addition, we show that these methods outperform SotA baselines from the literature.
APA
Lauriere, M., Perrin, S., Girgin, S., Muller, P., Jain, A., Cabannes, T., Piliouras, G., Perolat, J., Elie, R., Pietquin, O. & Geist, M. (2022). Scalable Deep Reinforcement Learning Algorithms for Mean Field Games. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:12078-12095. Available from https://proceedings.mlr.press/v162/lauriere22a.html.