Model-Based Active Exploration

Pranav Shyam, Wojciech Jaśkowski, Faustino Gomez
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5779-5788, 2019.

Abstract

Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations. This paper introduces an efficient active exploration algorithm, Model-Based Active eXploration (MAX), which uses an ensemble of forward models to plan to observe novel events. This is carried out by optimizing agent behaviour with respect to a measure of novelty derived from the Bayesian perspective of exploration, which is estimated using the disagreement between the futures predicted by the ensemble members. We show empirically that in semi-random discrete environments where directed exploration is critical to make progress, MAX is at least an order of magnitude more efficient than strong baselines. MAX scales to high-dimensional continuous environments where it builds task-agnostic models that can be used for any downstream task.
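The novelty signal sketched in the abstract can be made concrete with a small illustration: given an ensemble of probabilistic forward models, the disagreement between the futures they predict serves as the exploration utility that planning maximizes. The sketch below is illustrative only and is not the authors' implementation; all names (EnsembleMember, disagreement_novelty), the network sizes, and the variance-of-means proxy (a cheap stand-in for the divergence-based measure the abstract refers to) are assumptions for this example.

import torch
import torch.nn as nn

class EnsembleMember(nn.Module):
    """One probabilistic forward model: (state, action) -> Gaussian over next state."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * state_dim),  # predict mean and log-variance
        )

    def forward(self, state, action):
        mean, log_var = self.net(torch.cat([state, action], dim=-1)).chunk(2, dim=-1)
        return mean, log_var.exp()

def disagreement_novelty(ensemble, state, action):
    """Novelty as disagreement between the ensemble's predicted futures,
    approximated here by the variance of the member means across the ensemble."""
    means = torch.stack([m(state, action)[0] for m in ensemble])  # (K, batch, state_dim)
    return means.var(dim=0).sum(dim=-1)  # high where the models disagree

# Usage: score imagined transitions during planning and steer the agent toward
# the ones the ensemble disagrees about most, i.e. the novel ones.
ensemble = [EnsembleMember(state_dim=4, action_dim=2) for _ in range(5)]
state = torch.randn(8, 4)    # batch of 8 imagined states
actions = torch.randn(8, 2)  # candidate actions
novelty = disagreement_novelty(ensemble, state, actions)

Because the disagreement is computed on predicted futures rather than visited states, the agent can plan to reach novel regions instead of only rewarding itself after stumbling into them, which is the active/reactive distinction the abstract draws.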

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-shyam19a,
  title     = {Model-Based Active Exploration},
  author    = {Shyam, Pranav and Ja{\'{s}}kowski, Wojciech and Gomez, Faustino},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5779--5788},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/shyam19a/shyam19a.pdf},
  url       = {https://proceedings.mlr.press/v97/shyam19a.html},
  abstract  = {Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations. This paper introduces an efficient active exploration algorithm, Model-Based Active eXploration (MAX), which uses an ensemble of forward models to plan to observe novel events. This is carried out by optimizing agent behaviour with respect to a measure of novelty derived from the Bayesian perspective of exploration, which is estimated using the disagreement between the futures predicted by the ensemble members. We show empirically that in semi-random discrete environments where directed exploration is critical to make progress, MAX is at least an order of magnitude more efficient than strong baselines. MAX scales to high-dimensional continuous environments where it builds task-agnostic models that can be used for any downstream task.}
}
Endnote
%0 Conference Paper
%T Model-Based Active Exploration
%A Pranav Shyam
%A Wojciech Jaśkowski
%A Faustino Gomez
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-shyam19a
%I PMLR
%P 5779--5788
%U https://proceedings.mlr.press/v97/shyam19a.html
%V 97
%X Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations. This paper introduces an efficient active exploration algorithm, Model-Based Active eXploration (MAX), which uses an ensemble of forward models to plan to observe novel events. This is carried out by optimizing agent behaviour with respect to a measure of novelty derived from the Bayesian perspective of exploration, which is estimated using the disagreement between the futures predicted by the ensemble members. We show empirically that in semi-random discrete environments where directed exploration is critical to make progress, MAX is at least an order of magnitude more efficient than strong baselines. MAX scales to high-dimensional continuous environments where it builds task-agnostic models that can be used for any downstream task.
APA
Shyam, P., Jaśkowski, W. & Gomez, F. (2019). Model-Based Active Exploration. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5779-5788. Available from https://proceedings.mlr.press/v97/shyam19a.html.