Exploration-Exploitation in MDPs with Options

Ronan Fruit, Alessandro Lazaric
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:576-584, 2017.

Abstract

While a large body of empirical results shows that temporally-extended actions and options may significantly affect the learning performance of an agent, the theoretical understanding of how and when options can be beneficial in online reinforcement learning is relatively limited. In this paper, we derive upper and lower bounds on the regret of a variant of UCRL using options. While we first analyze the algorithm in the general case of semi-Markov decision processes (SMDPs), we show how these results can be translated to the specific case of MDPs with options, and we illustrate simple scenarios in which the regret of learning with options can be provably much smaller than the regret suffered when learning with primitive actions.
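
For reference, the quantity such an analysis typically controls is the average-reward regret accumulated over n decision steps. The sketch below assumes the standard SMDP formulation (random holding times τ_i, per-decision rewards r_i, optimal average reward ρ*); the paper's exact definition may differ in its details.

% Hypothetical sketch (not the paper's verbatim definition): regret of a
% learning agent A in an SMDP M after n decision steps, starting from state s.
% tau_i is the random holding time of the i-th action (or option), r_i is the
% reward accumulated while it executes, and rho^*(M) is the optimal average
% reward per unit of time.
\[
  \Delta(M, A, s, n) \;=\; \Big(\sum_{i=1}^{n} \tau_i\Big)\,\rho^{*}(M) \;-\; \sum_{i=1}^{n} r_i .
\]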

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-fruit17a,
  title     = {{Exploration-Exploitation in MDPs with Options}},
  author    = {Fruit, Ronan and Lazaric, Alessandro},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {576--584},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/fruit17a/fruit17a.pdf},
  url       = {https://proceedings.mlr.press/v54/fruit17a.html},
  abstract  = {While a large body of empirical results show that temporally-extended actions and options may significantly affect the learning performance of an agent, the theoretical understanding of how and when options can be beneficial in online reinforcement learning is relatively limited. In this paper, we derive an upper and lower bound on the regret of a variant of UCRL using options. While we first analyze the algorithm in the general case of semi-Markov decision processes (SMDPs), we show how these results can be translated to the specific case of MDPs with options and we illustrate simple scenarios in which the regret of learning with options can be provably much smaller than the regret suffered when learning with primitive actions.}
}
Endnote
%0 Conference Paper
%T Exploration-Exploitation in MDPs with Options
%A Ronan Fruit
%A Alessandro Lazaric
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-fruit17a
%I PMLR
%P 576--584
%U https://proceedings.mlr.press/v54/fruit17a.html
%V 54
%X While a large body of empirical results show that temporally-extended actions and options may significantly affect the learning performance of an agent, the theoretical understanding of how and when options can be beneficial in online reinforcement learning is relatively limited. In this paper, we derive an upper and lower bound on the regret of a variant of UCRL using options. While we first analyze the algorithm in the general case of semi-Markov decision processes (SMDPs), we show how these results can be translated to the specific case of MDPs with options and we illustrate simple scenarios in which the regret of learning with options can be provably much smaller than the regret suffered when learning with primitive actions.
APA
Fruit, R. & Lazaric, A. (2017). Exploration-Exploitation in MDPs with Options. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:576-584. Available from https://proceedings.mlr.press/v54/fruit17a.html.
