Low-Variance and Zero-Variance Baselines for Extensive-Form Games

Trevor Davis, Martin Schmid, Michael Bowling
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2392-2401, 2020.

Abstract

Extensive-form games (EFGs) are a common model of multi-agent interactions with imperfect information. State-of-the-art algorithms for solving these games typically perform full walks of the game tree that can prove prohibitively slow in large games. Alternatively, sampling-based methods such as Monte Carlo Counterfactual Regret Minimization walk one or more trajectories through the tree, touching only a fraction of the nodes on each iteration, at the expense of requiring more iterations to converge due to the variance of sampled values. In this paper, we extend recent work that uses baseline estimates to reduce this variance. We introduce a framework of baseline-corrected values in EFGs that generalizes the previous work. Within our framework, we propose new baseline functions that result in significantly reduced variance compared to existing techniques. We show that one particular choice of such a function — predictive baseline — is provably optimal under certain sampling schemes. This allows for efficient computation of zero-variance value estimates even along sampled trajectories.
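To make the baseline idea concrete, the following Python sketch shows one way a baseline-corrected value estimate can be computed along a single sampled trajectory, in the spirit of the control-variate estimators the paper builds on. It is a minimal illustration under assumed interfaces, not the paper's implementation; all names (strategy, baseline, is_terminal, terminal_value, actions) are hypothetical. The sampled branch receives an importance-corrected residual relative to its baseline, while unsampled branches fall back to the baseline alone, which keeps the estimate unbiased in expectation.

import random

def baseline_corrected_value(history, strategy, baseline, terminal_value,
                             is_terminal, actions):
    # Estimate the expected value of `history` (a tuple of actions so far)
    # from one sampled trajectory.
    #   strategy(h)       -> dict mapping each legal action to its probability
    #   baseline(h, a)    -> scalar estimate b(h, a) of the action's value
    #   terminal_value(h) -> payoff at a terminal history
    #   actions(h)        -> list of legal actions at h
    if is_terminal(history):
        return terminal_value(history)

    probs = strategy(history)
    legal = actions(history)

    # Outcome sampling: walk a single action, chosen on-policy.
    sampled = random.choices(legal, weights=[probs[a] for a in legal])[0]
    q = probs[sampled]

    estimate = 0.0
    for a in legal:
        b = baseline(history, a)
        if a == sampled:
            # Sampled branch: recurse, then apply the control-variate
            # correction b + (child_estimate - b) / q.
            child = baseline_corrected_value(history + (a,), strategy, baseline,
                                             terminal_value, is_terminal, actions)
            action_value = b + (child - b) / q
        else:
            # Unsampled branches use the baseline alone.
            action_value = b
        estimate += probs[a] * action_value
    return estimate

If the baseline equals the true expected value of each child, the correction term vanishes and the estimate has zero variance along the sampled trajectory, which is the property that motivates the paper's predictive baseline.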

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-davis20a,
  title     = {Low-Variance and Zero-Variance Baselines for Extensive-Form Games},
  author    = {Davis, Trevor and Schmid, Martin and Bowling, Michael},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2392--2401},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/davis20a/davis20a.pdf},
  url       = {https://proceedings.mlr.press/v119/davis20a.html}
}
Endnote
%0 Conference Paper
%T Low-Variance and Zero-Variance Baselines for Extensive-Form Games
%A Trevor Davis
%A Martin Schmid
%A Michael Bowling
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-davis20a
%I PMLR
%P 2392--2401
%U https://proceedings.mlr.press/v119/davis20a.html
%V 119
APA
Davis, T., Schmid, M., & Bowling, M. (2020). Low-Variance and Zero-Variance Baselines for Extensive-Form Games. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2392-2401. Available from https://proceedings.mlr.press/v119/davis20a.html.
