Goal-Conditioned Reinforcement Learning with Imagined Subgoals

Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1430-1440, 2021.

Abstract

Goal-conditioned reinforcement learning endows an agent with a large variety of skills, but it often struggles to solve tasks that require more temporally extended reasoning. In this work, we propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks. Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic. This high-level policy predicts intermediate states halfway to the goal using the value function as a reachability metric. We don’t require the policy to reach these subgoals explicitly. Instead, we use them to define a prior policy, and incorporate this prior into a KL-constrained policy iteration scheme to speed up and regularize learning. Imagined subgoals are used during policy learning, but not during test time, where we only apply the learned policy. We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.
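To make the abstract's description more concrete, the sketch below gives a schematic version of the KL-constrained policy iteration it outlines. This is an illustration inferred from the abstract only, not the paper's exact formulation: the symbols \pi^{H} (high-level subgoal policy), \pi^{\mathrm{prior}}, Q, V, and the threshold \varepsilon are assumed notation.

% Schematic only; notation is assumed, see the paper for the precise objectives.
\begin{align*}
  \pi^{\mathrm{prior}}_k(a \mid s, g)
    &= \mathbb{E}_{s_g \sim \pi^{H}(\cdot \mid s, g)}\!\left[\pi_k(a \mid s, s_g)\right]
    && \text{(prior: act as if the imagined subgoal were the goal)} \\
  \pi_{k+1}
    &= \arg\max_{\pi}\; \mathbb{E}_{a \sim \pi(\cdot \mid s, g)}\!\left[Q^{\pi_k}(s, a, g)\right]
    \ \text{s.t.}\ D_{\mathrm{KL}}\!\left(\pi(\cdot \mid s, g)\,\Vert\,\pi^{\mathrm{prior}}_k(\cdot \mid s, g)\right) \le \varepsilon
    && \text{(KL-constrained policy improvement)} \\
  \pi^{H}
    &\approx \arg\max_{\pi^{H}}\; \mathbb{E}_{s_g \sim \pi^{H}(\cdot \mid s, g)}\!\left[\min\!\left(V(s, s_g),\, V(s_g, g)\right)\right]
    && \text{(subgoals roughly halfway, with } V \text{ as a reachability metric)}
\end{align*}

Consistent with the abstract, only the low-level policy \pi(a \mid s, g) is executed at test time; the high-level policy and the KL term are used only during training.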

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-chane-sane21a,
  title     = {Goal-Conditioned Reinforcement Learning with Imagined Subgoals},
  author    = {Chane-Sane, Elliot and Schmid, Cordelia and Laptev, Ivan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1430--1440},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/chane-sane21a/chane-sane21a.pdf},
  url       = {https://proceedings.mlr.press/v139/chane-sane21a.html},
  abstract  = {Goal-conditioned reinforcement learning endows an agent with a large variety of skills, but it often struggles to solve tasks that require more temporally extended reasoning. In this work, we propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks. Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic. This high-level policy predicts intermediate states halfway to the goal using the value function as a reachability metric. We don’t require the policy to reach these subgoals explicitly. Instead, we use them to define a prior policy, and incorporate this prior into a KL-constrained policy iteration scheme to speed up and regularize learning. Imagined subgoals are used during policy learning, but not during test time, where we only apply the learned policy. We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.}
}
Endnote
%0 Conference Paper
%T Goal-Conditioned Reinforcement Learning with Imagined Subgoals
%A Elliot Chane-Sane
%A Cordelia Schmid
%A Ivan Laptev
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-chane-sane21a
%I PMLR
%P 1430--1440
%U https://proceedings.mlr.press/v139/chane-sane21a.html
%V 139
%X Goal-conditioned reinforcement learning endows an agent with a large variety of skills, but it often struggles to solve tasks that require more temporally extended reasoning. In this work, we propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks. Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic. This high-level policy predicts intermediate states halfway to the goal using the value function as a reachability metric. We don’t require the policy to reach these subgoals explicitly. Instead, we use them to define a prior policy, and incorporate this prior into a KL-constrained policy iteration scheme to speed up and regularize learning. Imagined subgoals are used during policy learning, but not during test time, where we only apply the learned policy. We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.
APA
Chane-Sane, E., Schmid, C., & Laptev, I. (2021). Goal-Conditioned Reinforcement Learning with Imagined Subgoals. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1430-1440. Available from https://proceedings.mlr.press/v139/chane-sane21a.html.
