Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN

Dror Freirich, Tzahi Shimkin, Ron Meir, Aviv Tamar
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1983-1992, 2019.

Abstract

The recently proposed distributional approach to reinforcement learning (DiRL) is centered on learning the distribution of the reward-to-go, often referred to as the value distribution. In this work, we show that the distributional Bellman equation, which drives DiRL methods, is equivalent to a generative adversarial network (GAN) model. In this formulation, DiRL can be seen as learning a deep generative model of the value distribution, driven by the discrepancy between the distribution of the current value, and the distribution of the sum of current reward and next value. We use this insight to propose a GAN-based approach to DiRL, which leverages the strengths of GANs in learning distributions of high dimensional data. In particular, we show that our GAN approach can be used for DiRL with multivariate rewards, an important setting which cannot be tackled with prior methods. The multivariate setting also allows us to unify learning the distribution of values and state transitions, and we exploit this idea to devise a novel exploration method that is driven by the discrepancy in estimating both values and states.
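As background for the abstract above, here is a minimal sketch of the distributional Bellman equation from the DiRL literature, in generic notation that is not necessarily the paper's: for a fixed policy \pi and discount factor \gamma, the reward-to-go Z^\pi satisfies the equality in distribution

    Z^\pi(x, a) \;\overset{D}{=}\; R(x, a) + \gamma\, Z^\pi(X', A'), \qquad X' \sim P(\cdot \mid x, a), \quad A' \sim \pi(\cdot \mid X'),

where \overset{D}{=} denotes equality of probability laws. The GAN reading described in the abstract corresponds, schematically, to training a generator G_\theta(z; x, a) that produces value samples so that the law of the left-hand side matches the law of the right-hand side, with a discriminator estimating the discrepancy between the two:

    \min_\theta \; d\Big(\operatorname{law}\big(G_\theta(z; x, a)\big),\; \operatorname{law}\big(R(x, a) + \gamma\, G_\theta(z'; X', A')\big)\Big),

with d the divergence estimated by the discriminator. This is a sketch under the stated assumptions, not the paper's exact objective or architecture.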

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-freirich19a,
  title     = {Distributional Multivariate Policy Evaluation and Exploration with the {B}ellman {GAN}},
  author    = {Freirich, Dror and Shimkin, Tzahi and Meir, Ron and Tamar, Aviv},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1983--1992},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/freirich19a/freirich19a.pdf},
  url       = {https://proceedings.mlr.press/v97/freirich19a.html}
}
Endnote
%0 Conference Paper
%T Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN
%A Dror Freirich
%A Tzahi Shimkin
%A Ron Meir
%A Aviv Tamar
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-freirich19a
%I PMLR
%P 1983--1992
%U https://proceedings.mlr.press/v97/freirich19a.html
%V 97
APA
Freirich, D., Shimkin, T., Meir, R. & Tamar, A. (2019). Distributional Multivariate Policy Evaluation and Exploration with the Bellman GAN. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1983-1992. Available from https://proceedings.mlr.press/v97/freirich19a.html.