Monte-Carlo Tree Search as Regularized Policy Optimization

Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis Antonoglou, Remi Munos
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3769-3778, 2020.

Abstract

The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to groundbreaking results in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on handcrafted heuristics that are only partially understood. In this paper, we show that AlphaZero’s search heuristic, along with other common ones, can be interpreted as an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
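For context, the "specific regularized policy optimization problem" mentioned in the abstract can be sketched informally as follows; the notation here ($q$ for search Q-values, $\pi_\theta$ for the network's prior policy, $\lambda_N$ for a regularization weight that decays with the total visit count $N$, and $\mathcal{S}$ for the probability simplex over actions) is a paraphrase of the paper's, not something stated on this page:

\[
\bar{\pi} \;=\; \operatorname*{arg\,max}_{y \in \mathcal{S}} \left( q^{\top} y \;-\; \lambda_N\, \mathrm{KL}\!\left[\pi_\theta \,\Vert\, y\right] \right).
\]

On this reading, the visit-count distribution produced by AlphaZero-style search tracks an approximation of $\bar{\pi}$, and the proposed variant replaces that approximation with the exact maximizer.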

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-grill20a,
  title     = {{M}onte-{C}arlo Tree Search as Regularized Policy Optimization},
  author    = {Grill, Jean-Bastien and Altch{\'e}, Florent and Tang, Yunhao and Hubert, Thomas and Valko, Michal and Antonoglou, Ioannis and Munos, Remi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3769--3778},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/grill20a/grill20a.pdf},
  url       = {https://proceedings.mlr.press/v119/grill20a.html},
  abstract  = {The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to groundbreaking results in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on handcrafted heuristics that are only partially understood. In this paper, we show that AlphaZero’s search heuristic, along with other common ones, can be interpreted as an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.}
}
EndNote
%0 Conference Paper
%T Monte-Carlo Tree Search as Regularized Policy Optimization
%A Jean-Bastien Grill
%A Florent Altché
%A Yunhao Tang
%A Thomas Hubert
%A Michal Valko
%A Ioannis Antonoglou
%A Remi Munos
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-grill20a
%I PMLR
%P 3769--3778
%U https://proceedings.mlr.press/v119/grill20a.html
%V 119
%X The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to groundbreaking results in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on handcrafted heuristics that are only partially understood. In this paper, we show that AlphaZero’s search heuristic, along with other common ones, can be interpreted as an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
APA
Grill, J., Altché, F., Tang, Y., Hubert, T., Valko, M., Antonoglou, I., & Munos, R. (2020). Monte-Carlo Tree Search as Regularized Policy Optimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3769-3778. Available from https://proceedings.mlr.press/v119/grill20a.html.
