Fast computation of Nash Equilibria in Imperfect Information Games

Remi Munos, Julien Perolat, Jean-Baptiste Lespiau, Mark Rowland, Bart De Vylder, Marc Lanctot, Finbarr Timbers, Daniel Hennes, Shayegan Omidshafiei, Audrunas Gruslys, Mohammad Gheshlaghi Azar, Edward Lockhart, Karl Tuyls
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7119-7129, 2020.

Abstract

We introduce and analyze a class of algorithms, called Mirror Ascent against an Improved Opponent (MAIO), for computing Nash equilibria in two-player zero-sum games, both in normal form and in sequential form with imperfect information. These algorithms update the policy of each player with a mirror-ascent step so as to maximize the value of playing against an improved opponent. The improved opponent can be a best response, a greedy policy, or a policy improved by policy gradient or any other reinforcement learning or search technique. We establish convergence of the last iterate to the set of Nash equilibria and show that the speed of convergence depends on the amount of improvement offered by these improved policies. In addition, we show that, under certain conditions, using a best response as the improved policy yields an exponential convergence rate.
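To make the update concrete, below is a minimal sketch of a MAIO-style iteration on a two-player zero-sum normal-form game, assuming entropic mirror ascent (multiplicative weights) over the simplex and an exact best response as the improved opponent. The function names, the step size eta, and the rock-paper-scissors example are illustrative choices of ours, not details specified by the paper.

import numpy as np

def best_response_col(A, x):
    # Column player's best response to row policy x (minimizes x^T A y).
    return np.eye(A.shape[1])[np.argmin(A.T @ x)]

def best_response_row(A, y):
    # Row player's best response to column policy y (maximizes x^T A y).
    return np.eye(A.shape[0])[np.argmax(A @ y)]

def mirror_ascent_step(p, grad, eta):
    # Entropic mirror-ascent (multiplicative-weights) step on the simplex.
    q = p * np.exp(eta * grad)
    return q / q.sum()

def maio(A, iters=5000, eta=0.1):
    # Row player maximizes x^T A y; column player minimizes it.
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    y = np.full(A.shape[1], 1.0 / A.shape[1])
    for _ in range(iters):
        # Improved opponents: here, exact best responses to the current
        # policies (the strongest of the improvements the abstract lists).
        y_plus = best_response_col(A, x)
        x_plus = best_response_row(A, y)
        # Each player takes one mirror-ascent step on the value of
        # playing against the *improved* opponent.
        x = mirror_ascent_step(x, A @ y_plus, eta)
        y = mirror_ascent_step(y, -(A.T @ x_plus), eta)
    return x, y

if __name__ == "__main__":
    # Rock-paper-scissors: the unique Nash equilibrium is uniform play,
    # so both policies should drift toward [1/3, 1/3, 1/3].
    A = np.array([[0., -1., 1.],
                  [1., 0., -1.],
                  [-1., 1., 0.]])
    x, y = maio(A)
    print(x, y)

Weaker improvements (e.g., a single policy-gradient or search step on the opponent's policy) slot in by replacing the best-response functions; per the abstract, the speed of convergence then scales with how much those improved policies actually gain over the opponent's current one.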

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-munos20a,
  title     = {Fast computation of {N}ash Equilibria in Imperfect Information Games},
  author    = {Munos, Remi and Perolat, Julien and Lespiau, Jean-Baptiste and Rowland, Mark and De Vylder, Bart and Lanctot, Marc and Timbers, Finbarr and Hennes, Daniel and Omidshafiei, Shayegan and Gruslys, Audrunas and Azar, Mohammad Gheshlaghi and Lockhart, Edward and Tuyls, Karl},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7119--7129},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/munos20a/munos20a.pdf},
  url       = {https://proceedings.mlr.press/v119/munos20a.html},
  abstract  = {We introduce and analyze a class of algorithms, called Mirror Ascent against an Improved Opponent (MAIO), for computing Nash equilibria in two-player zero-sum games, both in normal form and in sequential form with imperfect information. These algorithms update the policy of each player with a mirror-ascent step to maximize the value of playing against an improved opponent. An improved opponent can be a best response, a greedy policy, a policy improved by policy gradient, or by any other reinforcement learning or search techniques. We establish a convergence result of the last iterate to the set of Nash equilibria and show that the speed of convergence depends on the amount of improvement offered by these improved policies. In addition, we show that under some condition, if we use a best response as improved policy, then an exponential convergence rate is achieved.}
}
Endnote
%0 Conference Paper
%T Fast computation of Nash Equilibria in Imperfect Information Games
%A Remi Munos
%A Julien Perolat
%A Jean-Baptiste Lespiau
%A Mark Rowland
%A Bart De Vylder
%A Marc Lanctot
%A Finbarr Timbers
%A Daniel Hennes
%A Shayegan Omidshafiei
%A Audrunas Gruslys
%A Mohammad Gheshlaghi Azar
%A Edward Lockhart
%A Karl Tuyls
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-munos20a
%I PMLR
%P 7119--7129
%U https://proceedings.mlr.press/v119/munos20a.html
%V 119
%X We introduce and analyze a class of algorithms, called Mirror Ascent against an Improved Opponent (MAIO), for computing Nash equilibria in two-player zero-sum games, both in normal form and in sequential form with imperfect information. These algorithms update the policy of each player with a mirror-ascent step to maximize the value of playing against an improved opponent. An improved opponent can be a best response, a greedy policy, a policy improved by policy gradient, or by any other reinforcement learning or search techniques. We establish a convergence result of the last iterate to the set of Nash equilibria and show that the speed of convergence depends on the amount of improvement offered by these improved policies. In addition, we show that under some condition, if we use a best response as improved policy, then an exponential convergence rate is achieved.
APA
Munos, R., Perolat, J., Lespiau, J., Rowland, M., De Vylder, B., Lanctot, M., Timbers, F., Hennes, D., Omidshafiei, S., Gruslys, A., Azar, M.G., Lockhart, E., & Tuyls, K. (2020). Fast computation of Nash Equilibria in Imperfect Information Games. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7119-7129. Available from https://proceedings.mlr.press/v119/munos20a.html.
