Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies

Tom Zahavy, Avinatan Hasidim, Haim Kaplan, Yishay Mansour
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:906-934, 2020.

Abstract

We consider a setting of hierarchical reinforcement learning in which the reward is a sum of components. For each component we are given a policy that maximizes it, and our goal is to assemble, from these individual policies, a single policy that maximizes the sum of the components. We provide theoretical guarantees for assembling such policies in deterministic MDPs with collectible rewards. Our approach builds on formulating this problem as a traveling salesman problem with a discounted reward. We focus on local solutions, i.e., policies that use only information from the current state; thus, they are easy to implement and do not require substantial computational resources. We propose three local stochastic policies and prove that, in the worst case, they guarantee better performance than any deterministic local policy; experimental results suggest that they also perform better on average.
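
The setting described in the abstract can be illustrated with a minimal, hypothetical sketch (not the paper's algorithms): a deterministic grid MDP with collectible unit rewards, one greedy sub-policy per collectible, and a local stochastic rule that, using only the current state, samples which collectible to pursue next with weight proportional to its discounted one-item value gamma**distance. The grid dynamics, Manhattan-distance sub-policies, and the particular sampling weights are illustrative assumptions.

# Hypothetical sketch of the setting: collectible rewards in a deterministic MDP,
# one sub-policy per reward component, and a local stochastic high-level rule.
# All names and the gamma**distance weighting are illustrative assumptions,
# not the three policies proposed in the paper.
import random

GAMMA = 0.95  # discount factor (assumed)

def manhattan(a, b):
    """Steps the greedy sub-policy needs to reach collectible b from state a."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step_towards(state, goal):
    """One action of the sub-policy that maximizes a single reward component:
    move one cell closer to its collectible (deterministic dynamics)."""
    (x, y), (gx, gy) = state, goal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return state

def local_stochastic_choice(state, remaining, gamma=GAMMA):
    """Pick the next collectible using only the current state: weight each
    remaining collectible by gamma**distance, then sample proportionally."""
    weights = [gamma ** manhattan(state, c) for c in remaining]
    r, acc = random.random() * sum(weights), 0.0
    for c, w in zip(remaining, weights):
        acc += w
        if r <= acc:
            return c
    return remaining[-1]

def rollout(start, collectibles, gamma=GAMMA):
    """Assemble the sub-policies: repeatedly choose a target locally, follow its
    sub-policy until collection, and accumulate the discounted reward."""
    state, t, value = start, 0, 0.0
    remaining = list(collectibles)
    while remaining:
        target = local_stochastic_choice(state, remaining, gamma)
        while state != target:
            state = step_towards(state, target)
            t += 1
        value += gamma ** t  # each collectible pays reward 1 when reached
        remaining.remove(target)
    return value

if __name__ == "__main__":
    random.seed(0)
    items = [(5, 1), (2, 7), (9, 9)]
    returns = [rollout((0, 0), items) for _ in range(1000)]
    print("average discounted return:", sum(returns) / len(returns))

Averaging many rollouts of this stochastic rule gives a rough sense of how a local policy trades off nearby versus distant collectibles under discounting; the paper's analysis compares such stochastic local policies against the best deterministic local policy in the worst case.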

Cite this Paper


BibTeX
@InProceedings{pmlr-v117-zahavy20a,
  title     = {Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies},
  author    = {Zahavy, Tom and Hasidim, Avinatan and Kaplan, Haim and Mansour, Yishay},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  pages     = {906--934},
  year      = {2020},
  editor    = {Kontorovich, Aryeh and Neu, Gergely},
  volume    = {117},
  series    = {Proceedings of Machine Learning Research},
  month     = {08 Feb--11 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v117/zahavy20a/zahavy20a.pdf},
  url       = {https://proceedings.mlr.press/v117/zahavy20a.html},
  abstract  = {We consider a setting of hierarchical reinforcement learning, in which the reward is a sum of components. For each component, we are given a policy that maximizes it, and our goal is to assemble a policy from the individual policies that maximize the sum of the components. We provide theoretical guarantees for assembling such policies in deterministic MDPs with collectible rewards. Our approach builds on formulating this problem as a traveling salesman problem with a discounted reward. We focus on local solutions, i.e., policies that only use information from the current state; thus, they are easy to implement and do not require substantial computational resources. We propose three local stochastic policies and prove that they guarantee better performance than any deterministic local policy in the worst case; experimental results suggest that they also perform better on average.}
}
Endnote
%0 Conference Paper
%T Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies
%A Tom Zahavy
%A Avinatan Hasidim
%A Haim Kaplan
%A Yishay Mansour
%B Proceedings of the 31st International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Aryeh Kontorovich
%E Gergely Neu
%F pmlr-v117-zahavy20a
%I PMLR
%P 906--934
%U https://proceedings.mlr.press/v117/zahavy20a.html
%V 117
%X We consider a setting of hierarchical reinforcement learning, in which the reward is a sum of components. For each component, we are given a policy that maximizes it, and our goal is to assemble a policy from the individual policies that maximize the sum of the components. We provide theoretical guarantees for assembling such policies in deterministic MDPs with collectible rewards. Our approach builds on formulating this problem as a traveling salesman problem with a discounted reward. We focus on local solutions, i.e., policies that only use information from the current state; thus, they are easy to implement and do not require substantial computational resources. We propose three local stochastic policies and prove that they guarantee better performance than any deterministic local policy in the worst case; experimental results suggest that they also perform better on average.
APA
Zahavy, T., Hasidim, A., Kaplan, H. & Mansour, Y. (2020). Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:906-934. Available from https://proceedings.mlr.press/v117/zahavy20a.html.
