Hierarchical Imitation and Reinforcement Learning

Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudik, Yisong Yue, Hal Daumé III
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2917-2926, 2018.

Abstract

We study how to effectively leverage expert feedback to learn sequential decision-making policies. We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma’s Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.
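As a rough illustration of the idea in the abstract, the sketch below pairs a behavior-cloned high-level subgoal policy (imitation learning) with Q-learning low-level controllers (reinforcement learning) on a toy corridor task. This is a minimal sketch under stated assumptions, not the paper's algorithm: the corridor environment, the `expert_subgoal` oracle, and all hyperparameters are illustrative, and the paper's actual instantiations (e.g., DAgger-style high-level learning with deep RL on Montezuma's Revenge) are more involved.

```python
# Minimal sketch of hierarchical guidance: IL at the high level (subgoal
# labels from an expert), RL at the low level (Q-learning toward each
# subgoal). Toy corridor task; everything here is an illustrative assumption.
import random
from collections import defaultdict

N = 12                  # corridor states 0..N-1; the agent starts at 0
SUBGOALS = [4, 8, 11]   # checkpoints to reach in order; 11 is the goal
ACTIONS = [-1, +1]      # step left / step right

def expert_subgoal(s):
    """Hypothetical high-level expert: labels the next unreached checkpoint."""
    return next(g for g in SUBGOALS if g > s) if s < SUBGOALS[-1] else SUBGOALS[-1]

def low_level_rollout(s, g, q, eps=0.2, alpha=0.5, gamma=0.95, max_steps=30):
    """Epsilon-greedy Q-learning toward subgoal g. The reward is the internal
    'reached subgoal' signal, so no low-level expert labels and no sparse
    environment reward are needed here."""
    for _ in range(max_steps):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda b: q[(g, s, b)]))
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == g else 0.0
        best_next = max(q[(g, s2, b)] for b in ACTIONS)
        q[(g, s, a)] += alpha * (r + gamma * best_next - q[(g, s, a)])
        s = s2
        if s == g:
            return s, True
    return s, False

def train(episodes=200):
    q = defaultdict(float)  # low-level Q-values: (subgoal, state, action) -> value
    hi_policy = {}          # behavior-cloned high-level policy: state -> subgoal
    for _ in range(episodes):
        s = 0
        while s != SUBGOALS[-1]:
            g = expert_subgoal(s)    # IL: query the expert only at the high level
            hi_policy[s] = g         # behavior cloning reduces to memorization here
            s, reached = low_level_rollout(s, g, q)  # RL at the low level
            if not reached:
                break                # low level failed; start a new episode
    return hi_policy, q

if __name__ == "__main__":
    hi_policy, q = train()
    print("High-level policy (state -> subgoal):", dict(sorted(hi_policy.items())))
```

Note that the expert is queried only for high-level subgoal labels, while the low level learns from the internal "reached subgoal" signal; this division of labor is the intuition behind the label-efficiency and exploration gains the abstract describes.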

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-le18a,
  title     = {Hierarchical Imitation and Reinforcement Learning},
  author    = {Le, Hoang and Jiang, Nan and Agarwal, Alekh and Dudik, Miroslav and Yue, Yisong and Daum{\'e}, III, Hal},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2917--2926},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/le18a/le18a.pdf},
  url       = {https://proceedings.mlr.press/v80/le18a.html}
}
APA
Le, H., Jiang, N., Agarwal, A., Dudik, M., Yue, Y., & Daumé III, H. (2018). Hierarchical Imitation and Reinforcement Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2917-2926. Available from https://proceedings.mlr.press/v80/le18a.html.
