Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty

Youngjin Kim, Wontae Nam, Hyunwoo Kim, Ji-Hoon Kim, Gunhee Kim
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3379-3388, 2019.

Abstract

Exploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where observations contain not only the task-dependent state novelty of interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from observations. Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of an observation with respect to the learned representation of a compressive value network. Through extensive experiments on static image classification, a grid-world, and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring state novelty in distractive environments where state-of-the-art exploration methods often degenerate.
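
To make the idea concrete, below is a minimal sketch of how such a bonus can be computed, assuming a Gaussian encoder, a standard-normal prior, and PyTorch. The class and function names (CompressiveValueNet, ib_loss_and_bonus) and the beta value are illustrative assumptions, not the authors' implementation. The value network is trained under an information-bottleneck objective, and the per-observation compressiveness term KL(p(z|x) || q(z)) serves as the exploration bonus.

    # Hypothetical sketch of a Curiosity-Bottleneck-style bonus
    # (assumes a Gaussian encoder and a standard-normal prior q(z);
    # not the authors' code).
    import torch
    import torch.nn as nn

    class CompressiveValueNet(nn.Module):
        def __init__(self, obs_dim, z_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
            self.mu = nn.Linear(128, z_dim)
            self.log_var = nn.Linear(128, z_dim)
            self.value_head = nn.Linear(z_dim, 1)  # predicts return from z

        def forward(self, obs):
            h = self.encoder(obs)
            mu, log_var = self.mu(h), self.log_var(h)
            # Reparameterization trick: z ~ N(mu, sigma^2)
            z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
            return self.value_head(z), mu, log_var

    def kl_to_standard_normal(mu, log_var):
        # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
        return 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=-1)

    def ib_loss_and_bonus(net, obs, returns, beta=1e-3):
        value, mu, log_var = net(obs)
        kl = kl_to_standard_normal(mu, log_var)  # compressiveness term
        # IB-style objective: fit the value target while compressing z
        loss = ((value.squeeze(-1) - returns) ** 2 + beta * kl).mean()
        return loss, kl.detach()  # kl: per-observation exploration bonus

Usage is straightforward: given a batch of observations and return targets, the loss is backpropagated to train the network, while the detached KL term is added to the agent's reward. Observations that compress poorly under the learned representation (high KL) would receive a high bonus.

    net = CompressiveValueNet(obs_dim=64)
    obs, ret = torch.randn(8, 64), torch.randn(8)
    loss, bonus = ib_loss_and_bonus(net, obs, ret)  # bonus has shape (8,)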

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-kim19c,
  title     = {Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty},
  author    = {Kim, Youngjin and Nam, Wontae and Kim, Hyunwoo and Kim, Ji-Hoon and Kim, Gunhee},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3379--3388},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/kim19c/kim19c.pdf},
  url       = {https://proceedings.mlr.press/v97/kim19c.html},
  abstract  = {Exploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where observation contains not only task-dependent state novelty of our interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from observation. Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of observation with respect to the learned representation of a compressive value network. With extensive experiments on static image classification, grid-world and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring the state novelty in distractive environments where state-of-the-art exploration methods often degenerate.}
}
Endnote
%0 Conference Paper
%T Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty
%A Youngjin Kim
%A Wontae Nam
%A Hyunwoo Kim
%A Ji-Hoon Kim
%A Gunhee Kim
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-kim19c
%I PMLR
%P 3379--3388
%U https://proceedings.mlr.press/v97/kim19c.html
%V 97
%X Exploration based on state novelty has brought great success in challenging reinforcement learning problems with sparse rewards. However, existing novelty-based strategies become inefficient in real-world problems where observation contains not only task-dependent state novelty of our interest but also task-irrelevant information that should be ignored. We introduce an information-theoretic exploration strategy named Curiosity-Bottleneck that distills task-relevant information from observation. Based on the information bottleneck principle, our exploration bonus is quantified as the compressiveness of observation with respect to the learned representation of a compressive value network. With extensive experiments on static image classification, grid-world and three hard-exploration Atari games, we show that Curiosity-Bottleneck learns an effective exploration strategy by robustly measuring the state novelty in distractive environments where state-of-the-art exploration methods often degenerate.
APA
Kim, Y., Nam, W., Kim, H., Kim, J. & Kim, G. (2019). Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3379-3388. Available from https://proceedings.mlr.press/v97/kim19c.html.
