Deep Variational Reinforcement Learning for POMDPs

Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, Shimon Whiteson
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2117-2126, 2018.

Abstract

Many real-world sequential decision-making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is a great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete, noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari, we show that our method outperforms previous approaches that rely on recurrent neural networks to encode the past.
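To make the joint objective described in the abstract concrete, the following is a minimal sketch, not taken verbatim from the paper: it assumes an n-step actor-critic RL loss, a particle-filter ELBO estimate with K particles and importance weights w, a truncation length n matching the n-step rollout, and a trade-off weight lambda (all notation here is our assumption).

    % Sketch only: joint loss = RL loss plus weighted negative n-step ELBO estimate.
    \mathcal{L}_t \;=\; \mathcal{L}^{\mathrm{RL}}_t \;+\; \lambda\, \mathcal{L}^{\mathrm{ELBO}}_t,
    \qquad
    % Negative log of the average particle importance weights, summed over the
    % n-step window; w^{(k)}_{t'} is the weight of particle k at time t'.
    \mathcal{L}^{\mathrm{ELBO}}_t \;=\; -\sum_{t'=t}^{t+n-1} \log\!\left( \frac{1}{K} \sum_{k=1}^{K} w^{(k)}_{t'} \right)

Minimizing the second term maximizes the n-step ELBO estimate, so the same gradient step updates both the policy and the generative model, which is what ties the learned latent state representation to the control task.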

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-igl18a,
  title     = {Deep Variational Reinforcement Learning for {POMDP}s},
  author    = {Igl, Maximilian and Zintgraf, Luisa and Le, Tuan Anh and Wood, Frank and Whiteson, Shimon},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2117--2126},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/igl18a/igl18a.pdf},
  url       = {https://proceedings.mlr.press/v80/igl18a.html},
  abstract  = {Many real-world sequential decision-making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is a great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete, noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari, we show that our method outperforms previous approaches that rely on recurrent neural networks to encode the past.}
}
Endnote
%0 Conference Paper
%T Deep Variational Reinforcement Learning for POMDPs
%A Maximilian Igl
%A Luisa Zintgraf
%A Tuan Anh Le
%A Frank Wood
%A Shimon Whiteson
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-igl18a
%I PMLR
%P 2117--2126
%U https://proceedings.mlr.press/v80/igl18a.html
%V 80
%X Many real-world sequential decision-making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is a great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete, noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari, we show that our method outperforms previous approaches that rely on recurrent neural networks to encode the past.
APA
Igl, M., Zintgraf, L., Le, T.A., Wood, F. & Whiteson, S. (2018). Deep Variational Reinforcement Learning for POMDPs. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2117-2126. Available from https://proceedings.mlr.press/v80/igl18a.html.
