GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms

Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1039-1048, 2018.

Abstract

In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, Quality-Diversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low-dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments.
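The sequential combination described above can be illustrated with a toy sketch: a Goal Exploration Process first builds an archive of (policy, outcome) pairs by sampling goals in outcome space, replaying the policy whose outcome is nearest to each goal with a small parameter perturbation, and logging all transitions; those transitions would then seed the replay buffer of a DDPG learner. Everything below (the 1-D environment, the linear policy, the function and variable names) is a hypothetical illustration, not the paper's actual implementation.

```python
import random

def rollout(theta, steps=20):
    """Toy 1-D environment: a clipped linear policy pushes the state.
    Returns the trajectory as (s, a, s') tuples and an outcome
    descriptor (here, the final position)."""
    s, traj = 0.0, []
    for _ in range(steps):
        a = max(-1.0, min(1.0, theta[0] * s + theta[1]))  # linear policy, clipped action
        s_next = s + 0.1 * a
        traj.append((s, a, s_next))
        s = s_next
    return traj, s

def gep(n_episodes=200, sigma=0.1, seed=0):
    """Minimal Goal Exploration Process loop (illustrative sketch)."""
    rng = random.Random(seed)
    archive = []   # (policy parameters, outcome) pairs
    replay = []    # transitions; in GEP-PG these would fill DDPG's replay buffer
    for _ in range(n_episodes):
        if not archive:
            # Bootstrap with a random policy.
            theta = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        else:
            # Sample a goal in outcome space, pick the archived policy
            # whose outcome is nearest, and perturb its parameters.
            goal = rng.uniform(-2.0, 2.0)
            theta, _ = min(archive, key=lambda po: abs(po[1] - goal))
            theta = [p + rng.gauss(0, sigma) for p in theta]
        traj, outcome = rollout(theta)
        archive.append((theta, outcome))
        replay.extend(traj)
    return archive, replay

archive, replay = gep()
# `replay` would next be loaded into DDPG's buffer before gradient-based fine-tuning.
```

The key design point this sketch mirrors is the decoupling: exploration is driven purely by diversity in outcome space (no reward signal), while exploitation is deferred to the subsequent gradient-based learner.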

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-colas18a,
  title     = {{GEP}-{PG}: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms},
  author    = {Colas, C{\'e}dric and Sigaud, Olivier and Oudeyer, Pierre-Yves},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1039--1048},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/colas18a/colas18a.pdf},
  url       = {http://proceedings.mlr.press/v80/colas18a.html},
  abstract  = {In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, Quality-Diversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low-dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments.}
}
Endnote
%0 Conference Paper
%T GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms
%A Cédric Colas
%A Olivier Sigaud
%A Pierre-Yves Oudeyer
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-colas18a
%I PMLR
%P 1039--1048
%U http://proceedings.mlr.press/v80/colas18a.html
%V 80
%X In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, Quality-Diversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low-dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments.
APA
Colas, C., Sigaud, O., & Oudeyer, P.-Y. (2018). GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1039-1048. Available from http://proceedings.mlr.press/v80/colas18a.html.