Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation

Dane Corneil, Wulfram Gerstner, Johanni Brea
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1049-1058, 2018.

Abstract

Modern reinforcement learning algorithms reach super-human performance on many board and video games, but they are sample inefficient, i.e. they typically require significantly more playing experience than humans to reach an equal performance level. To improve sample efficiency, an agent may build a model of the environment and use planning methods to update its policy. In this article we introduce Variational State Tabulation (VaST), which maps an environment with a high-dimensional state space (e.g. the space of visual inputs) to an abstract tabular model. Prioritized sweeping with small backups, a highly efficient planning method, can then be used to update state-action values. We show how VaST can rapidly learn to maximize reward in tasks like 3D navigation and efficiently adapt to sudden changes in rewards or transition probabilities.
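The planning step mentioned in the abstract can be pictured with a minimal tabular sketch. The code below implements generic prioritized sweeping over an empirical transition and reward model, propagating Q-value backups to predecessor states in priority order. It is not the paper's exact "small backups" variant, and all names (TabularPrioritizedSweeping, observe, n_planning_steps) are hypothetical; in VaST, the discrete states s would be the abstract states produced by the variational encoder from high-dimensional observations.

import heapq
from collections import defaultdict

# Hypothetical sketch: standard prioritized sweeping over a learned tabular
# model, shown only to illustrate the kind of planning the abstract refers to.
class TabularPrioritizedSweeping:
    def __init__(self, n_actions, gamma=0.99, theta=1e-4, n_planning_steps=20):
        self.n_actions = n_actions
        self.gamma = gamma
        self.theta = theta                      # minimum priority to enqueue
        self.n_planning_steps = n_planning_steps
        self.q = defaultdict(float)             # (s, a) -> action value
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
        self.reward_sum = defaultdict(float)    # (s, a) -> cumulative reward
        self.predecessors = defaultdict(set)    # s' -> {(s, a)} leading into s'
        self.queue = []                         # max-priority heap via negated priorities

    def _model(self, s, a):
        # Empirical transition distribution and mean reward for (s, a).
        succ = self.counts[(s, a)]
        total = sum(succ.values())
        r = self.reward_sum[(s, a)] / total
        return r, {s2: c / total for s2, c in succ.items()}

    def _backup(self, s, a):
        # One full Bellman backup of Q(s, a) under the empirical model.
        r, probs = self._model(s, a)
        return r + self.gamma * sum(
            p * max(self.q[(s2, b)] for b in range(self.n_actions))
            for s2, p in probs.items()
        )

    def observe(self, s, a, r, s2):
        # Update the tabular model with one transition, then plan.
        self.counts[(s, a)][s2] += 1
        self.reward_sum[(s, a)] += r
        self.predecessors[s2].add((s, a))
        priority = abs(self._backup(s, a) - self.q[(s, a)])
        if priority > self.theta:
            heapq.heappush(self.queue, (-priority, (s, a)))
        self._plan()

    def _plan(self):
        # Pop the highest-priority state-action pairs, back them up, and
        # enqueue predecessors whose values may now be stale.
        for _ in range(self.n_planning_steps):
            if not self.queue:
                break
            _, (s, a) = heapq.heappop(self.queue)
            self.q[(s, a)] = self._backup(s, a)
            for (sp, ap) in self.predecessors[s]:
                p = abs(self._backup(sp, ap) - self.q[(sp, ap)])
                if p > self.theta:
                    heapq.heappush(self.queue, (-p, (sp, ap)))

A toy usage example on a hypothetical chain of discrete states with two actions: after observing a rewarded transition, the value propagates back to the predecessor state within the same planning call.

planner = TabularPrioritizedSweeping(n_actions=2)
planner.observe(s=0, a=1, r=0.0, s2=1)
planner.observe(s=1, a=1, r=1.0, s2=2)
print(planner.q[(1, 1)], planner.q[(0, 1)])   # -> 1.0 0.99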

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-corneil18a,
  title     = {Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation},
  author    = {Corneil, Dane and Gerstner, Wulfram and Brea, Johanni},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1049--1058},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/corneil18a/corneil18a.pdf},
  url       = {https://proceedings.mlr.press/v80/corneil18a.html}
}
APA
Corneil, D., Gerstner, W. & Brea, J. (2018). Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1049-1058. Available from https://proceedings.mlr.press/v80/corneil18a.html.
