Statistics and Samples in Distributional Reinforcement Learning

Mark Rowland, Robert Dadashi, Saurabh Kumar, Remi Munos, Marc G. Bellemare, Will Dabney
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5528-5536, 2019.

Abstract

We present a unifying framework for designing and analysing distributional reinforcement learning (DRL) algorithms in terms of recursively estimating statistics of the return distribution. Our key insight is that DRL algorithms can be decomposed as the combination of some statistical estimator and a method for imputing a return distribution consistent with that set of statistics. With this new understanding, we are able to provide improved analyses of existing DRL algorithms as well as construct a new algorithm (EDRL) based upon estimation of the expectiles of the return distribution. We compare EDRL with existing methods on a variety of MDPs to illustrate concrete aspects of our analysis, and develop a deep RL variant of the algorithm, ER-DQN, which we evaluate on the Atari-57 suite of games.
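The paper's EDRL algorithm is built on expectiles, the statistics that generalise the mean the way quantiles generalise the median. As a minimal illustration (not the paper's implementation), the τ-expectile of a sample can be estimated by gradient descent on the asymmetric squared loss E[|τ − 1{X < q}| (X − q)²]; the function name and hyperparameters below are illustrative choices:

```python
import numpy as np

def expectile(samples, tau, lr=0.1, n_steps=2000):
    """Estimate the tau-expectile of an empirical distribution by
    gradient descent on the asymmetric squared loss
    E[|tau - 1{X < q}| * (X - q)^2]."""
    q = float(np.mean(samples))  # the 0.5-expectile is exactly the mean
    for _ in range(n_steps):
        # weight under-estimates by tau, over-estimates by 1 - tau
        weights = np.where(samples < q, 1.0 - tau, tau)
        grad = -2.0 * np.mean(weights * (samples - q))
        q -= lr * grad
    return q

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(expectile(x, 0.5))  # close to the sample mean
print(expectile(x, 0.9))  # above the mean, in the upper tail
```

For τ = 0.5 the loss reduces to ordinary squared error, recovering the mean; larger τ penalises under-estimation more heavily and pushes the expectile into the upper tail, which is what lets a set of expectiles summarise a whole return distribution.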

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-rowland19a,
  title     = {Statistics and Samples in Distributional Reinforcement Learning},
  author    = {Rowland, Mark and Dadashi, Robert and Kumar, Saurabh and Munos, Remi and Bellemare, Marc G. and Dabney, Will},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5528--5536},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/rowland19a/rowland19a.pdf},
  url       = {http://proceedings.mlr.press/v97/rowland19a.html},
  abstract  = {We present a unifying framework for designing and analysing distributional reinforcement learning (DRL) algorithms in terms of recursively estimating statistics of the return distribution. Our key insight is that DRL algorithms can be decomposed as the combination of some statistical estimator and a method for imputing a return distribution consistent with that set of statistics. With this new understanding, we are able to provide improved analyses of existing DRL algorithms as well as construct a new algorithm (EDRL) based upon estimation of the expectiles of the return distribution. We compare EDRL with existing methods on a variety of MDPs to illustrate concrete aspects of our analysis, and develop a deep RL variant of the algorithm, ER-DQN, which we evaluate on the Atari-57 suite of games.}
}
Endnote
%0 Conference Paper
%T Statistics and Samples in Distributional Reinforcement Learning
%A Mark Rowland
%A Robert Dadashi
%A Saurabh Kumar
%A Remi Munos
%A Marc G. Bellemare
%A Will Dabney
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-rowland19a
%I PMLR
%P 5528--5536
%U http://proceedings.mlr.press/v97/rowland19a.html
%V 97
%X We present a unifying framework for designing and analysing distributional reinforcement learning (DRL) algorithms in terms of recursively estimating statistics of the return distribution. Our key insight is that DRL algorithms can be decomposed as the combination of some statistical estimator and a method for imputing a return distribution consistent with that set of statistics. With this new understanding, we are able to provide improved analyses of existing DRL algorithms as well as construct a new algorithm (EDRL) based upon estimation of the expectiles of the return distribution. We compare EDRL with existing methods on a variety of MDPs to illustrate concrete aspects of our analysis, and develop a deep RL variant of the algorithm, ER-DQN, which we evaluate on the Atari-57 suite of games.
APA
Rowland, M., Dadashi, R., Kumar, S., Munos, R., Bellemare, M. G., & Dabney, W. (2019). Statistics and samples in distributional reinforcement learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 97:5528-5536. Available from http://proceedings.mlr.press/v97/rowland19a.html.