Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling

Kyowoon Lee, Sol-A Kim, Jaesik Choi, Seong-Whan Lee
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2937-2946, 2018.

Abstract

Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces. Recently, deep neural networks have successfully been applied to games with discrete action spaces. However, deep neural networks for discrete actions are not suitable for devising strategies for games where a very small change in an action can dramatically affect the outcome. In this paper, we present a new self-play reinforcement learning framework that incorporates a continuous search algorithm, enabling search in continuous action spaces with a kernel regression method. Without any hand-crafted features, our network is trained by supervised learning followed by self-play reinforcement learning with a high-fidelity simulator for the Olympic sport of curling. The program trained under our framework outperforms existing programs equipped with several hand-crafted features and won an international digital curling competition.
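To make the kernel regression idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): a Nadaraya-Watson estimate of the value of an untried continuous action from a few actions already evaluated during search. The function name, the Gaussian kernel, the bandwidth, and the (x, y, spin) action encoding are assumptions made for this example only.

# Illustrative sketch: kernel regression over sampled continuous actions.
# Estimates the value of a new action from nearby evaluated actions,
# which is the basic mechanism that lets a search algorithm generalize
# across a continuous action space instead of enumerating discrete bins.
import numpy as np

def kernel_regression_value(query_action, sampled_actions, sampled_values, bandwidth=0.5):
    """Nadaraya-Watson estimate of the value at `query_action`.

    query_action:    1-D array, e.g. a hypothetical (x, y, spin) curling shot.
    sampled_actions: (n, d) array of actions already evaluated by simulation.
    sampled_values:  (n,) array of their estimated values.
    bandwidth:       kernel width; an assumed hyperparameter, not taken from the paper.
    """
    diffs = sampled_actions - query_action                # (n, d) differences
    sq_dists = np.sum(diffs ** 2, axis=1)                 # squared Euclidean distances
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    if weights.sum() < 1e-12:                             # query far from every sample
        return float(sampled_values.mean())
    return float(np.dot(weights, sampled_values) / weights.sum())

# Toy usage: three evaluated shots and a query lying between them.
actions = np.array([[0.0, 0.0, 1.0],
                    [0.5, 0.1, 1.0],
                    [1.0, -0.2, -1.0]])
values = np.array([0.2, 0.8, 0.4])
print(kernel_regression_value(np.array([0.4, 0.05, 1.0]), actions, values))

In this sketch, actions close to the query (under the kernel) dominate the weighted average, so a small perturbation of the shot produces a smoothly varying value estimate rather than the abrupt jumps a discrete-action network would give.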

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-lee18b,
  title     = {Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling},
  author    = {Lee, Kyowoon and Kim, Sol-A and Choi, Jaesik and Lee, Seong-Whan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2937--2946},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/lee18b/lee18b.pdf},
  url       = {https://proceedings.mlr.press/v80/lee18b.html},
  abstract  = {Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces. Recently, deep neural networks have successfully been applied to games with discrete actions spaces. However, deep neural networks for discrete actions are not suitable for devising strategies for games where a very small change in an action can dramatically affect the outcome. In this paper, we present a new self-play reinforcement learning framework which equips a continuous search algorithm which enables to search in continuous action spaces with a kernel regression method. Without any hand-crafted features, our network is trained by supervised learning followed by self-play reinforcement learning with a high-fidelity simulator for the Olympic sport of curling. The program trained under our framework outperforms existing programs equipped with several hand-crafted features and won an international digital curling competition.}
}
Endnote
%0 Conference Paper
%T Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling
%A Kyowoon Lee
%A Sol-A Kim
%A Jaesik Choi
%A Seong-Whan Lee
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-lee18b
%I PMLR
%P 2937--2946
%U https://proceedings.mlr.press/v80/lee18b.html
%V 80
%X Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces. Recently, deep neural networks have successfully been applied to games with discrete actions spaces. However, deep neural networks for discrete actions are not suitable for devising strategies for games where a very small change in an action can dramatically affect the outcome. In this paper, we present a new self-play reinforcement learning framework which equips a continuous search algorithm which enables to search in continuous action spaces with a kernel regression method. Without any hand-crafted features, our network is trained by supervised learning followed by self-play reinforcement learning with a high-fidelity simulator for the Olympic sport of curling. The program trained under our framework outperforms existing programs equipped with several hand-crafted features and won an international digital curling competition.
APA
Lee, K., Kim, S., Choi, J. & Lee, S. (2018). Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2937-2946. Available from https://proceedings.mlr.press/v80/lee18b.html.

Related Material