LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework

Woojun Kim, Jeonghye Kim, Youngchul Sung
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:16619-16638, 2023.

Abstract

In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic architecture. The proposed framework learns to integrate a set of diverse exploration strategies so that the agent can adaptively select the most effective exploration strategy to realize an effective exploration-exploitation trade-off for each given task. The effectiveness of the proposed exploration framework is demonstrated by various experiments in the MiniGrid and Atari environments.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-kim23k,
  title     = {{LESSON}: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework},
  author    = {Kim, Woojun and Kim, Jeonghye and Sung, Youngchul},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {16619--16638},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kim23k/kim23k.pdf},
  url       = {https://proceedings.mlr.press/v202/kim23k.html},
  abstract  = {In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic architecture. The proposed framework learns to integrate a set of diverse exploration strategies so that the agent can adaptively select the most effective exploration strategy to realize an effective exploration-exploitation trade-off for each given task. The effectiveness of the proposed exploration framework is demonstrated by various experiments in the MiniGrid and Atari environments.}
}
Endnote
%0 Conference Paper
%T LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework
%A Woojun Kim
%A Jeonghye Kim
%A Youngchul Sung
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-kim23k
%I PMLR
%P 16619--16638
%U https://proceedings.mlr.press/v202/kim23k.html
%V 202
%X In this paper, a unified framework for exploration in reinforcement learning (RL) is proposed based on an option-critic architecture. The proposed framework learns to integrate a set of diverse exploration strategies so that the agent can adaptively select the most effective exploration strategy to realize an effective exploration-exploitation trade-off for each given task. The effectiveness of the proposed exploration framework is demonstrated by various experiments in the MiniGrid and Atari environments.
APA
Kim, W., Kim, J. & Sung, Y. (2023). LESSON: Learning to Integrate Exploration Strategies for Reinforcement Learning via an Option Framework. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:16619-16638. Available from https://proceedings.mlr.press/v202/kim23k.html.
