EMI: Exploration with Mutual Information

Hyoungseok Kim, Jaekyeom Kim, Yeonwoo Jeong, Sergey Levine, Hyun Oh Song
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3360-3369, 2019.

Abstract

Reinforcement learning algorithms struggle when the reward signal is very sparse. In these cases, naive random exploration methods essentially rely on a random walk to stumble onto a rewarding state. Recent works utilize intrinsic motivation to guide exploration via generative models, predictive forward models, or discriminative modeling of novelty. We propose EMI, an exploration method that constructs embedding representations of states and actions which do not rely on generative decoding of the full observation, but instead extract predictive signals that can guide exploration based on forward prediction in the representation space. Our experiments show competitive results on challenging locomotion tasks with continuous control and on image-based exploration tasks with discrete actions on Atari. The source code is available at https://github.com/snu-mllab/EMI.
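
To make the core idea concrete, the following is a minimal, illustrative sketch (in PyTorch) of an intrinsic reward computed as forward-prediction error in a learned embedding space, in the spirit of the representation-space prediction the abstract describes. The network sizes, the simple squared-error reward, and all names here are assumptions for illustration; they are not the paper's actual mutual-information objective, which the full text derives.

# Sketch only: intrinsic reward from forward-prediction error in a
# learned embedding space. Shapes and hyperparameters are assumed.
import torch
import torch.nn as nn

class Embedding(nn.Module):
    """Maps raw states (or actions) to a compact embedding vector."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class ForwardModel(nn.Module):
    """Predicts the next-state embedding from state/action embeddings."""
    def __init__(self, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Linear(2 * emb_dim, emb_dim)

    def forward(self, phi_s: torch.Tensor, psi_a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([phi_s, psi_a], dim=-1))

def intrinsic_reward(phi, psi, fwd, s, a, s_next):
    """Prediction error in embedding space; large error marks a novel transition."""
    with torch.no_grad():
        pred = fwd(phi(s), psi(a))
        return (pred - phi(s_next)).pow(2).sum(dim=-1)

In practice, such an intrinsic reward would be added to the environment reward during policy optimization; crucially, the error is measured in the embedding space, so no pixel-level reconstruction of the observation is needed.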

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-kim19a,
  title     = {{EMI}: Exploration with Mutual Information},
  author    = {Kim, Hyoungseok and Kim, Jaekyeom and Jeong, Yeonwoo and Levine, Sergey and Song, Hyun Oh},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3360--3369},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/kim19a/kim19a.pdf},
  url       = {https://proceedings.mlr.press/v97/kim19a.html}
}
EndNote
%0 Conference Paper
%T EMI: Exploration with Mutual Information
%A Hyoungseok Kim
%A Jaekyeom Kim
%A Yeonwoo Jeong
%A Sergey Levine
%A Hyun Oh Song
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-kim19a
%I PMLR
%P 3360--3369
%U https://proceedings.mlr.press/v97/kim19a.html
%V 97
APA
Kim, H., Kim, J., Jeong, Y., Levine, S. & Song, H.O. (2019). EMI: Exploration with Mutual Information. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3360-3369. Available from https://proceedings.mlr.press/v97/kim19a.html.