Model-Based Uncertainty in Value Functions

Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:8029-8052, 2023.

Abstract

We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning. In particular, we focus on characterizing the variance over values induced by a distribution over MDPs. Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation, but the over-approximation may result in inefficient exploration. We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values and explicitly characterizes the gap in previous work. Moreover, our uncertainty quantification technique is easily integrated into common exploration strategies and scales naturally beyond the tabular setting by using standard deep reinforcement learning architectures. Experiments in difficult exploration tasks, both in tabular and continuous control settings, show that our sharper uncertainty estimates improve sample-efficiency.
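
The abstract's central object, an uncertainty Bellman equation (UBE) whose fixed point estimates the posterior variance over values, can be illustrated with a small tabular experiment. The sketch below is not the equation proposed in this paper; it uses the generic recursion U = u + gamma^2 * P_mean * U from earlier UBE work, together with an assumed Dirichlet posterior over transitions and a crude local-uncertainty term, purely to show how such a fixed point is solved and compared against a Monte-Carlo estimate of the posterior variance. All variable names are illustrative assumptions.

# Minimal sketch (not the paper's algorithm): compare the Monte-Carlo posterior
# variance of a tabular value function against the fixed point of a generic
# uncertainty Bellman equation U = u + gamma^2 * P_mean @ U, in the spirit of
# earlier UBE work. The Dirichlet posterior, the local-uncertainty term `u`,
# and all names below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, gamma, n_mdp_samples = 5, 0.9, 2000

r = rng.uniform(0.0, 1.0, size=S)                    # known per-state reward
alpha = rng.uniform(0.5, 5.0, size=(S, S))           # Dirichlet params per row
alpha0 = alpha.sum(axis=1, keepdims=True)
P_mean = alpha / alpha0                              # posterior-mean transitions

def value(P):
    # Exact policy evaluation: V = (I - gamma * P)^{-1} r
    return np.linalg.solve(np.eye(S) - gamma * P, r)

# Ground truth by sampling: draw transition matrices from the posterior,
# evaluate each sampled MDP, and take the empirical variance of the values.
V_samples = np.array([value(np.vstack([rng.dirichlet(a) for a in alpha]))
                      for _ in range(n_mdp_samples)])
var_mc = V_samples.var(axis=0)

# Crude local uncertainty: per-row Dirichlet variances weighted by the squared
# posterior-mean values (ignores the negative covariances, so it over-estimates
# the variance of the expected next value).
V_bar = value(P_mean)
var_p = alpha * (alpha0 - alpha) / (alpha0**2 * (alpha0 + 1.0))
u = gamma**2 * var_p @ (V_bar**2)

# Solve the UBE fixed point as a linear system: (I - gamma^2 * P_mean) U = u.
U = np.linalg.solve(np.eye(S) - gamma**2 * P_mean, u)

print("Monte-Carlo posterior variance of V:", np.round(var_mc, 4))
print("UBE-style uncertainty estimate     :", np.round(U, 4))

Because the local term drops the covariances, the resulting UBE estimate typically exceeds the Monte-Carlo variance, which loosely mirrors the over-approximation issue the abstract describes for prior work.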

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-luis23a,
  title     = {Model-Based Uncertainty in Value Functions},
  author    = {Luis, Carlos E. and Bottero, Alessandro G. and Vinogradska, Julia and Berkenkamp, Felix and Peters, Jan},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {8029--8052},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/luis23a/luis23a.pdf},
  url       = {https://proceedings.mlr.press/v206/luis23a.html}
}
Endnote
%0 Conference Paper
%T Model-Based Uncertainty in Value Functions
%A Carlos E. Luis
%A Alessandro G. Bottero
%A Julia Vinogradska
%A Felix Berkenkamp
%A Jan Peters
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-luis23a
%I PMLR
%P 8029--8052
%U https://proceedings.mlr.press/v206/luis23a.html
%V 206
APA
Luis, C. E., Bottero, A. G., Vinogradska, J., Berkenkamp, F., & Peters, J. (2023). Model-Based Uncertainty in Value Functions. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:8029-8052. Available from https://proceedings.mlr.press/v206/luis23a.html.