Reinforcement Learning with Deep Energy-Based Policies

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1352-1361, 2017.

Abstract

We propose a method for learning expressive energy-based policies for continuous states and actions, which has previously been feasible only in tabular domains. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.
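To make the abstract concrete: soft Q-learning optimizes the maximum entropy objective, which augments the expected return with the entropy of the policy at every visited state, with a temperature \alpha trading off the two terms:

    J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[ r(s_t, a_t) + \alpha \, \mathcal{H}(\pi(\cdot \mid s_t)) \right]

The optimal policy is then the Boltzmann (energy-based) distribution whose negative energy is the soft Q-function, with the soft value playing the role of the log-partition function:

    \pi^*(a_t \mid s_t) = \exp\!\left( \tfrac{1}{\alpha}\left( Q^*_{\mathrm{soft}}(s_t, a_t) - V^*_{\mathrm{soft}}(s_t) \right) \right),
    \qquad
    V^*_{\mathrm{soft}}(s_t) = \alpha \log \int_{\mathcal{A}} \exp\!\left( \tfrac{1}{\alpha} Q^*_{\mathrm{soft}}(s_t, a') \right) \mathrm{d}a'

Sampling from this distribution over continuous actions is intractable, which is where the amortized Stein variational gradient descent (SVGD) sampler comes in: a feedforward network maps Gaussian noise to actions, and its parameters are updated so that the induced samples follow the SVGD direction toward the Boltzmann target. Below is a minimal PyTorch sketch of that update, not the authors' implementation; the names (rbf_kernel, svgd_sampler_loss, sampler, q_func) are placeholders, and details such as the median-distance bandwidth heuristic are assumptions.

import torch

def rbf_kernel(x, y):
    # Pairwise RBF kernel; the median-distance bandwidth heuristic is a
    # common SVGD default and an assumption here, not taken from the paper.
    dist2 = (x.unsqueeze(1) - y.unsqueeze(0)).pow(2).sum(-1)   # (n, m)
    h = dist2.detach().median() / torch.tensor(x.shape[0] + 1.0).log()
    return torch.exp(-dist2 / (h + 1e-8))

def svgd_sampler_loss(sampler, q_func, state, alpha=1.0, n=16, act_dim=2):
    # Surrogate loss whose gradient amortizes the SVGD update into the
    # sampling network. sampler(state, xi) maps noise to actions;
    # q_func(state, a) scores a batch of actions (both are assumed
    # differentiable placeholders).
    xi = torch.randn(n, act_dim)
    actions = sampler(state, xi)                               # (n, act_dim)

    # Treat the sampled actions as SVGD particles.
    particles = actions.detach().requires_grad_(True)
    log_p = q_func(state, particles).sum() / alpha             # Q/alpha = log-density up to a constant
    score = torch.autograd.grad(log_p, particles)[0]           # grad log p

    k = rbf_kernel(particles, particles.detach())              # (n, n)
    # Repulsive term sum_i grad_{a_i} k(a_i, a_j); the minus sign converts
    # autograd's gradient (taken at the evaluation point) to the SVGD form.
    grad_k = -torch.autograd.grad(k.sum(), particles)[0]
    phi = (k.detach().t() @ score + grad_k) / n                # SVGD direction

    # Chain rule through the sampler: descending this loss moves each
    # sampled action along its SVGD direction.
    return -(phi.detach() * actions).sum()

In the full algorithm this sampler update is interleaved with a soft Bellman backup that fits the soft Q-function (with the soft value estimated by importance sampling); the sketch covers only the policy side.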

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-haarnoja17a,
  title     = {Reinforcement Learning with Deep Energy-Based Policies},
  author    = {Tuomas Haarnoja and Haoran Tang and Pieter Abbeel and Sergey Levine},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1352--1361},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/haarnoja17a/haarnoja17a.pdf},
  url       = {https://proceedings.mlr.press/v70/haarnoja17a.html}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning with Deep Energy-Based Policies
%A Tuomas Haarnoja
%A Haoran Tang
%A Pieter Abbeel
%A Sergey Levine
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-haarnoja17a
%I PMLR
%P 1352--1361
%U https://proceedings.mlr.press/v70/haarnoja17a.html
%V 70
APA
Haarnoja, T., Tang, H., Abbeel, P. & Levine, S. (2017). Reinforcement Learning with Deep Energy-Based Policies. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1352-1361. Available from https://proceedings.mlr.press/v70/haarnoja17a.html.