The Uncertainty Bellman Equation and Exploration

Brendan O’Donoghue, Ian Osband, Remi Munos, Vlad Mnih
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3839-3848, 2018.

Abstract

We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN performance on 51 out of 57 games in the Atari suite.
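As a rough sketch of the idea described in the abstract (the notation below is a paraphrase for intuition, not an excerpt from the paper): writing $u^\pi_{sa}$ for the propagated uncertainty of a policy $\pi$ at state-action pair $(s,a)$, $\nu_{sa}$ for the local single-step uncertainty, and $\hat{P}$ for the estimated transition kernel, the UBE takes a backup of the form

$$ u^\pi_{sa} \;=\; \nu_{sa} \;+\; \gamma^2 \sum_{s',a'} \pi(a' \mid s')\, \hat{P}(s' \mid s,a)\, u^\pi_{s'a'}, $$

i.e., the same linear recursion as the Bellman equation for $Q^\pi$, but propagating local variances instead of rewards (the $\gamma^2$ factor reflecting that variances, rather than standard deviations, accumulate along a trajectory). Its unique fixed point upper-bounds the posterior variance of $Q^\pi$, and exploration then perturbs the value estimate by a term on the order of $\sqrt{u^\pi_{sa}}$ when selecting actions.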

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-odonoghue18a,
  title     = {The Uncertainty {B}ellman Equation and Exploration},
  author    = {O'Donoghue, Brendan and Osband, Ian and Munos, Remi and Mnih, Vlad},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3839--3848},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/odonoghue18a/odonoghue18a.pdf},
  url       = {https://proceedings.mlr.press/v80/odonoghue18a.html},
  abstract  = {We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN performance on 51 out of 57 games in the Atari suite.}
}
Endnote
%0 Conference Paper
%T The Uncertainty Bellman Equation and Exploration
%A Brendan O’Donoghue
%A Ian Osband
%A Remi Munos
%A Vlad Mnih
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-odonoghue18a
%I PMLR
%P 3839--3848
%U https://proceedings.mlr.press/v80/odonoghue18a.html
%V 80
%X We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN performance on 51 out of 57 games in the Atari suite.
APA
O’Donoghue, B., Osband, I., Munos, R. & Mnih, V. (2018). The Uncertainty Bellman Equation and Exploration. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3839-3848. Available from https://proceedings.mlr.press/v80/odonoghue18a.html.
