Learning Extensive-Form Perfect Equilibria in Two-Player Zero-Sum Sequential Games

Martino Bernasconi, Alberto Marchesi, Francesco Trovò
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2152-2160, 2024.

Abstract

Designing efficient algorithms for computing refinements of the Nash equilibrium (NE) in two-player zero-sum sequential games is of paramount importance, since the NE may prescribe sub-optimal actions off the equilibrium path. The extensive-form perfect equilibrium (EFPE) amends such a weakness by accounting for the possibility that players may make mistakes. This is crucial in the real world, which involves humans with bounded rationality, and it is also key in boosting superhuman agents for games like Poker. Nevertheless, there are only a few algorithms for computing NE refinements, and they either lack convergence guarantees to exact equilibria or do not scale to large games. We provide the first efficient iterative algorithm that provably converges to an EFPE in two-player zero-sum sequential games. Our algorithm works by tracking a sequence of equilibria of regularized-perturbed games, using a procedure specifically tailored to converge in last iterate to such equilibria. The procedure can be implemented efficiently by visiting the game tree, making our method computationally appealing. We also empirically evaluate our algorithm, showing that its strategies are much more robust to players’ mistakes than those of state-of-the-art algorithms.
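
The abstract describes the approach only at a high level. As a rough illustration of the general idea, namely following equilibria of regularized-perturbed games with a method that converges in last iterate, the sketch below solves a tiny zero-sum matrix game rather than a sequential game in sequence form. Everything in it is an assumption made for illustration: the lower-bounded ("perturbed") simplex, the entropy regularization, the extragradient inner solver, and the annealing schedules are generic stand-ins, not the authors' procedure.

# Hypothetical sketch of "tracking equilibria of regularized-perturbed games"
# with a last-iterate method, on a zero-sum MATRIX game. This is NOT the
# algorithm from the paper; all names, schedules, and the solver are
# illustrative assumptions.
import numpy as np

def project_perturbed_simplex(x, eps):
    # Euclidean projection onto {p : p_i >= eps, sum_i p_i = 1}; assumes eps * len(x) <= 1.
    n = len(x)
    y = x - eps                          # shift so the lower bound becomes 0
    s = 1.0 - n * eps                    # probability mass left to distribute
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - s
    rho = np.nonzero(u - css / (np.arange(n) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1)
    return np.maximum(y - theta, 0.0) + eps

def solve_regularized_perturbed(A, eps, tau, iters=2000, lr=0.05, x=None, y=None):
    # Approximates the unique equilibrium of the entropy-regularized game restricted
    # to eps-perturbed strategies, using extragradient updates (a generic method with
    # last-iterate guarantees for convex-concave problems).
    m, n = A.shape
    x = np.full(m, 1.0 / m) if x is None else x   # min (row) player's mixed strategy
    y = np.full(n, 1.0 / n) if y is None else y   # max (column) player's mixed strategy

    def grads(x, y):
        gx = A @ y + tau * (np.log(x) + 1.0)      # d/dx [x^T A y + tau * sum x log x]
        gy = A.T @ x - tau * (np.log(y) + 1.0)    # d/dy [x^T A y - tau * sum y log y]
        return gx, gy

    for _ in range(iters):
        gx, gy = grads(x, y)
        xh = project_perturbed_simplex(x - lr * gx, eps)   # extrapolation step
        yh = project_perturbed_simplex(y + lr * gy, eps)
        gxh, gyh = grads(xh, yh)
        x = project_perturbed_simplex(x - lr * gxh, eps)   # update step
        y = project_perturbed_simplex(y + lr * gyh, eps)
    return x, y

def track_equilibria(A, rounds=10, eps0=0.1, tau0=0.5, decay=0.5):
    # Outer loop: anneal the perturbation eps and the regularization tau towards 0,
    # warm-starting each inner solve from the previous (last) iterate.
    x = y = None
    eps, tau = eps0, tau0
    for _ in range(rounds):
        x, y = solve_regularized_perturbed(A, eps, tau, x=x, y=y)
        eps *= decay
        tau *= decay
    return x, y

if __name__ == "__main__":
    # Rock-paper-scissors payoffs for the min (row) player.
    A = np.array([[ 0.0,  1.0, -1.0],
                  [-1.0,  0.0,  1.0],
                  [ 1.0, -1.0,  0.0]])
    x, y = track_equilibria(A)
    print("row strategy:", np.round(x, 3), "column strategy:", np.round(y, 3))

Annealing eps and tau while warm-starting from the last iterate mirrors, in spirit only, the idea of tracking a sequence of equilibria of regularized-perturbed games; the paper's contribution is doing this provably and efficiently by visiting the game tree of a sequential game.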

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-bernasconi24a,
  title     = {Learning Extensive-Form Perfect Equilibria in Two-Player Zero-Sum Sequential Games},
  author    = {Bernasconi, Martino and Marchesi, Alberto and Trov\`{o}, Francesco},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2152--2160},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/bernasconi24a/bernasconi24a.pdf},
  url       = {https://proceedings.mlr.press/v238/bernasconi24a.html}
}
Endnote
%0 Conference Paper
%T Learning Extensive-Form Perfect Equilibria in Two-Player Zero-Sum Sequential Games
%A Martino Bernasconi
%A Alberto Marchesi
%A Francesco Trovò
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-bernasconi24a
%I PMLR
%P 2152--2160
%U https://proceedings.mlr.press/v238/bernasconi24a.html
%V 238
APA
Bernasconi, M., Marchesi, A. & Trovò, F. (2024). Learning Extensive-Form Perfect Equilibria in Two-Player Zero-Sum Sequential Games. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2152-2160. Available from https://proceedings.mlr.press/v238/bernasconi24a.html.