Trajectory-Based Off-Policy Deep Reinforcement Learning

Andreas Doerr, Michael Volpp, Marc Toussaint, Sebastian Trimpe, Christian Daniel
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1636-1645, 2019.

Abstract

Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.
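As a loose illustration of the kind of trajectory-level reuse of off-policy data described above (not the authors' exact objective), the following minimal Python/NumPy sketch shows self-normalized importance sampling over whole rollouts collected under a previous parameter-space search distribution. All names (log_gaussian, off_policy_objective, old_mean, new_mean, returns, etc.) are hypothetical and introduced only for this example.

import numpy as np

def log_gaussian(theta, mean, std):
    # Log-density of an isotropic Gaussian over policy parameters theta
    return -0.5 * np.sum(((theta - mean) / std) ** 2
                         + np.log(2.0 * np.pi * std ** 2), axis=-1)

def off_policy_objective(thetas, returns, old_mean, old_std, new_mean, new_std):
    # Importance-weighted estimate of the expected return under the new
    # search distribution, using rollouts generated under the old one.
    log_w = (log_gaussian(thetas, new_mean, new_std)
             - log_gaussian(thetas, old_mean, old_std))
    w = np.exp(log_w - np.max(log_w))   # stabilize before normalizing
    w = w / np.sum(w)                   # self-normalized importance weights
    return np.sum(w * returns)

# Toy usage: 2-D policy parameters, 32 previously collected rollouts
rng = np.random.default_rng(0)
thetas = rng.normal(0.0, 1.0, size=(32, 2))   # parameters used for old rollouts
returns = -np.sum(thetas ** 2, axis=1)        # placeholder return per rollout
print(off_policy_objective(thetas, returns,
                           old_mean=np.zeros(2), old_std=np.ones(2),
                           new_mean=np.full(2, 0.1), new_std=np.ones(2)))

Because the estimate is differentiable in the new distribution's parameters, it can be fed to standard stochastic optimizers (e.g., SGD), which is the kind of compatibility the abstract refers to.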

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-doerr19a,
  title = {Trajectory-Based Off-Policy Deep Reinforcement Learning},
  author = {Doerr, Andreas and Volpp, Michael and Toussaint, Marc and Trimpe, Sebastian and Daniel, Christian},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {1636--1645},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/doerr19a/doerr19a.pdf},
  url = {https://proceedings.mlr.press/v97/doerr19a.html},
  abstract = {Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.}
}
Endnote
%0 Conference Paper
%T Trajectory-Based Off-Policy Deep Reinforcement Learning
%A Andreas Doerr
%A Michael Volpp
%A Marc Toussaint
%A Sebastian Trimpe
%A Christian Daniel
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-doerr19a
%I PMLR
%P 1636--1645
%U https://proceedings.mlr.press/v97/doerr19a.html
%V 97
%X Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies like stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporation of previous rollouts via importance sampling greatly improves data-efficiency, whilst stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm is able to successfully and reliably learn solutions using fewer system interactions than standard policy gradient methods.
APA
Doerr, A., Volpp, M., Toussaint, M., Trimpe, S. & Daniel, C. (2019). Trajectory-Based Off-Policy Deep Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1636-1645. Available from https://proceedings.mlr.press/v97/doerr19a.html.
