Learning by Playing - Solving Sparse Reward Tasks from Scratch

Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Vlad Mnih, Nicolas Heess, Jost Tobias Springenberg
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4344-4353, 2018.

Abstract

We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.
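To make the idea in the abstract concrete, below is a minimal, self-contained Python sketch of a SAC-X-style loop: a scheduler repeatedly picks an intention (an auxiliary task or the main sparse-reward task), the corresponding policy is executed for a fixed horizon, transitions are stored with every task's reward channel, and all intention policies are then improved off-policy from the shared replay. The toy environment, tabular Q-functions, and bandit-style scheduler are hypothetical stand-ins for illustration only, not the paper's neural-network setup.

# Hypothetical sketch of a SAC-X-style training loop (not the authors' implementation).
import random
from collections import defaultdict, deque

class ToyEnv:
    """1-D chain; dense auxiliary reward for moving right, sparse main reward at the goal."""
    def __init__(self, size=10):
        self.size = size
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):  # action in {-1, +1}
        self.pos = max(0, min(self.size - 1, self.pos + action))
        rewards = {
            "aux_move_right": 1.0 if action == 1 else 0.0,                  # dense auxiliary signal
            "main_reach_goal": 1.0 if self.pos == self.size - 1 else 0.0,   # sparse main task
        }
        return self.pos, rewards

TASKS = ["aux_move_right", "main_reach_goal"]

# One tabular Q-function (the "intention policy") per task, trained off-policy.
Q = {task: defaultdict(float) for task in TASKS}
replay = deque(maxlen=10_000)

def act(task, state, eps=0.2):
    """Epsilon-greedy action under the chosen intention's Q-function."""
    if random.random() < eps:
        return random.choice([-1, 1])
    return max([-1, 1], key=lambda a: Q[task][(state, a)])

def learn_off_policy(batch_size=64, alpha=0.1, gamma=0.95):
    """Every intention learns from the SAME replayed experience, regardless of which policy collected it."""
    for _ in range(min(batch_size, len(replay))):
        s, a, rewards, s2 = random.choice(replay)
        for task in TASKS:
            target = rewards[task] + gamma * max(Q[task][(s2, b)] for b in (-1, 1))
            Q[task][(s, a)] += alpha * (target - Q[task][(s, a)])

# Bandit-style scheduler: prefers intentions whose executions yielded main-task reward.
sched_value = defaultdict(float)
sched_count = defaultdict(int)

def schedule():
    if random.random() < 0.3:
        return random.choice(TASKS)
    return max(TASKS, key=lambda t: sched_value[t] / max(sched_count[t], 1))

env = ToyEnv()
for episode in range(200):
    state = env.reset()
    for _ in range(3):                      # a few scheduling decisions per episode
        task = schedule()
        main_return = 0.0
        for _ in range(20):                 # execute the chosen intention for a fixed horizon
            action = act(task, state)
            next_state, rewards = env.step(action)
            replay.append((state, action, rewards, next_state))
            main_return += rewards["main_reach_goal"]
            state = next_state
        sched_count[task] += 1              # scheduler is updated with the main-task return
        sched_value[task] += main_return
        learn_off_policy()

The point the sketch illustrates is the one stated in the abstract: experience gathered while pursuing easy auxiliary rewards (here, moving right) is reused off-policy to learn the sparse main task (reaching the goal), while the scheduler gradually favors intentions whose executions produce main-task reward.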

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-riedmiller18a,
  title     = {Learning by Playing - Solving Sparse Reward Tasks from Scratch},
  author    = {Riedmiller, Martin and Hafner, Roland and Lampe, Thomas and Neunert, Michael and Degrave, Jonas and van de Wiele, Tom and Mnih, Vlad and Heess, Nicolas and Springenberg, Jost Tobias},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4344--4353},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/riedmiller18a/riedmiller18a.pdf},
  url       = {https://proceedings.mlr.press/v80/riedmiller18a.html},
  abstract  = {We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.}
}
Endnote
%0 Conference Paper
%T Learning by Playing - Solving Sparse Reward Tasks from Scratch
%A Martin Riedmiller
%A Roland Hafner
%A Thomas Lampe
%A Michael Neunert
%A Jonas Degrave
%A Tom Van de Wiele
%A Vlad Mnih
%A Nicolas Heess
%A Jost Tobias Springenberg
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-riedmiller18a
%I PMLR
%P 4344--4353
%U https://proceedings.mlr.press/v80/riedmiller18a.html
%V 80
%X We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.
APA
Riedmiller, M., Hafner, R., Lampe, T., Neunert, M., Degrave, J., Van de Wiele, T., Mnih, V., Heess, N. & Springenberg, J.T. (2018). Learning by Playing - Solving Sparse Reward Tasks from Scratch. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4344-4353. Available from https://proceedings.mlr.press/v80/riedmiller18a.html.
