Self-Imitation Learning

Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3878-3887, 2018.

Abstract

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.
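The core mechanism the abstract describes — reproducing only the agent's past good decisions — corresponds to an auxiliary imitation loss that is active only when a stored return exceeds the current value estimate. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name and the `value_coef` coefficient are illustrative:

```python
import numpy as np

def sil_losses(log_probs, values, returns, value_coef=0.01):
    """Sketch of a self-imitation objective: imitate a past transition
    only when its observed return R exceeds the current estimate V(s)."""
    # Clipped advantage (R - V(s))_+ : zero for transitions that were
    # no better than the current value estimate, so they are ignored.
    adv = np.maximum(returns - values, 0.0)
    # Policy term: return-weighted behavior cloning of past actions.
    policy_loss = -(log_probs * adv).mean()
    # Value term: push V(s) up toward returns that beat it.
    value_loss = 0.5 * (adv ** 2).mean()
    return policy_loss + value_coef * value_loss
```

Transitions with R ≤ V(s) contribute nothing to either term, so the update exploits only experience that outperformed the agent's current estimate of the state's value.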

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-oh18b,
  title     = {Self-Imitation Learning},
  author    = {Oh, Junhyuk and Guo, Yijie and Singh, Satinder and Lee, Honglak},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3878--3887},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/oh18b/oh18b.pdf},
  url       = {https://proceedings.mlr.press/v80/oh18b.html},
  abstract  = {This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.}
}
Endnote
%0 Conference Paper
%T Self-Imitation Learning
%A Junhyuk Oh
%A Yijie Guo
%A Satinder Singh
%A Honglak Lee
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-oh18b
%I PMLR
%P 3878--3887
%U https://proceedings.mlr.press/v80/oh18b.html
%V 80
%X This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.
APA
Oh, J., Guo, Y., Singh, S. & Lee, H. (2018). Self-Imitation Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3878-3887. Available from https://proceedings.mlr.press/v80/oh18b.html.