Composing Entropic Policies using Divergence Correction

Jonathan Hunt, Andre Barreto, Timothy Lillicrap, Nicolas Heess
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2911-2920, 2019.

Abstract

Composing skills mastered in one task to solve novel tasks promises dramatic improvements in the data efficiency of reinforcement learning. Here, we analyze two recent works composing behaviors represented in the form of action-value functions and show that they perform poorly in some situations. As part of this analysis, we extend an important generalization of policy improvement to the maximum entropy framework and introduce an algorithm for the practical implementation of successor features in continuous action spaces. Then we propose a novel approach which addresses the failure cases of prior work and, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between base policies. We study this approach in the tabular case and on non-trivial continuous control problems with compositional structure and show that it outperforms or matches existing methods across all tasks considered.
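The core idea described in the abstract can be summarized in standard maximum-entropy notation. The sketch below reflects our reading of the abstract only; the mixing weight b, temperature \alpha, and symbol names are our own conventions, and the precise statement is given in the paper:

  \pi_i^* = \arg\max_\pi \; \mathbb{E}_\pi \Big[ \textstyle\sum_{t} \gamma^t \big( r_i(s_t, a_t) + \alpha \, \mathcal{H}(\pi(\cdot \mid s_t)) \big) \Big]   % max-entropy objective of base task i
  r_b(s, a) = b \, r_1(s, a) + (1 - b) \, r_2(s, a)                                                                                                        % composed transfer task
  Q_b^*(s, a) = b \, Q_1^*(s, a) + (1 - b) \, Q_2^*(s, a) - C_b^\infty(s, a)                                                                               % divergence-corrected composition

Here the Q_i^* are the optimal soft action-value functions of the base tasks, and C_b^\infty \ge 0 denotes the discounted, future divergence between the base policies, which the method learns explicitly. Where the base policies agree along future trajectories the correction term vanishes and simply combining the base soft Q-functions recovers the optimal transfer policy; where they disagree, the correction accounts for the gap that makes naive composition suboptimal.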

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-hunt19a,
  title = {Composing Entropic Policies using Divergence Correction},
  author = {Hunt, Jonathan and Barreto, Andre and Lillicrap, Timothy and Heess, Nicolas},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {2911--2920},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/hunt19a/hunt19a.pdf},
  url = {https://proceedings.mlr.press/v97/hunt19a.html}
}
Endnote
%0 Conference Paper
%T Composing Entropic Policies using Divergence Correction
%A Jonathan Hunt
%A Andre Barreto
%A Timothy Lillicrap
%A Nicolas Heess
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-hunt19a
%I PMLR
%P 2911--2920
%U https://proceedings.mlr.press/v97/hunt19a.html
%V 97
APA
Hunt, J., Barreto, A., Lillicrap, T. & Heess, N. (2019). Composing Entropic Policies using Divergence Correction. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2911-2920. Available from https://proceedings.mlr.press/v97/hunt19a.html.
