Why is Posterior Sampling Better than Optimism for Reinforcement Learning?

Ian Osband, Benjamin Van Roy
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2701-2710, 2017.

Abstract

Computational results demonstrate that posterior sampling for reinforcement learning (PSRL) dramatically outperforms existing algorithms driven by optimism, such as UCRL2. We provide insight into the extent of this performance boost and the phenomenon that drives it. We leverage this insight to establish an $\tilde{O}(H\sqrt{SAT})$ Bayesian regret bound for PSRL in finite-horizon episodic Markov decision processes. This improves upon the best previous Bayesian regret bound of $\tilde{O}(H S \sqrt{AT})$ for any reinforcement learning algorithm. Our theoretical results are supported by extensive empirical evaluation.
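
For readers unfamiliar with the algorithm the abstract refers to, the following is a minimal sketch of tabular PSRL for a finite-horizon episodic MDP. It is not the paper's implementation: the Dirichlet prior over transitions, the Gaussian reward model with unit noise, and the names S, A, H, and env (with reset()/step() returning next state, reward, and a done flag) are all illustrative assumptions made here for brevity.

import numpy as np

def psrl(env, S, A, H, num_episodes, rng=np.random.default_rng(0)):
    # Posterior statistics (assumed conjugate priors, not from the paper's text):
    # Dirichlet counts for P(. | s, a) and Gaussian posteriors for mean rewards.
    dirichlet = np.ones((S, A, S))
    r_mean = np.zeros((S, A))
    r_count = np.ones((S, A))

    for _ in range(num_episodes):
        # 1. Sample one MDP (transitions and rewards) from the posterior.
        P = np.array([[rng.dirichlet(dirichlet[s, a]) for a in range(A)]
                      for s in range(S)])
        R = r_mean + rng.standard_normal((S, A)) / np.sqrt(r_count)

        # 2. Solve the sampled MDP exactly by backward induction over the horizon.
        Q = np.zeros((H, S, A))
        V = np.zeros((H + 1, S))
        for h in reversed(range(H)):
            Q[h] = R + P @ V[h + 1]      # (S, A) expected value of each action
            V[h] = Q[h].max(axis=1)

        # 3. Run the greedy policy for one episode and update the posterior.
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            s_next, r, done = env.step(a)
            dirichlet[s, a, s_next] += 1.0
            r_mean[s, a] += (r - r_mean[s, a]) / (r_count[s, a] + 1.0)
            r_count[s, a] += 1.0
            s = s_next
            if done:
                break

Unlike optimistic algorithms such as UCRL2, this procedure never constructs confidence sets; each episode it simply acts greedily with respect to a single statistically plausible MDP drawn from the posterior.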

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-osband17a,
  title = {Why is Posterior Sampling Better than Optimism for Reinforcement Learning?},
  author = {Osband, Ian and Van Roy, Benjamin},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages = {2701--2710},
  year = {2017},
  editor = {Precup, Doina and Teh, Yee Whye},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  month = {06--11 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v70/osband17a/osband17a.pdf},
  url = {https://proceedings.mlr.press/v70/osband17a.html},
  abstract = {Computational results demonstrate that posterior sampling for reinforcement learning (PSRL) dramatically outperforms existing algorithms driven by optimism, such as UCRL2. We provide insight into the extent of this performance boost and the phenomenon that drives it. We leverage this insight to establish an $\tilde{O}(H\sqrt{SAT})$ Bayesian regret bound for PSRL in finite-horizon episodic Markov decision processes. This improves upon the best previous Bayesian regret bound of $\tilde{O}(H S \sqrt{AT})$ for any reinforcement learning algorithm. Our theoretical results are supported by extensive empirical evaluation.}
}
Endnote
%0 Conference Paper
%T Why is Posterior Sampling Better than Optimism for Reinforcement Learning?
%A Ian Osband
%A Benjamin Van Roy
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-osband17a
%I PMLR
%P 2701--2710
%U https://proceedings.mlr.press/v70/osband17a.html
%V 70
%X Computational results demonstrate that posterior sampling for reinforcement learning (PSRL) dramatically outperforms existing algorithms driven by optimism, such as UCRL2. We provide insight into the extent of this performance boost and the phenomenon that drives it. We leverage this insight to establish an $\tilde{O}(H\sqrt{SAT})$ Bayesian regret bound for PSRL in finite-horizon episodic Markov decision processes. This improves upon the best previous Bayesian regret bound of $\tilde{O}(H S \sqrt{AT})$ for any reinforcement learning algorithm. Our theoretical results are supported by extensive empirical evaluation.
APA
Osband, I. & Van Roy, B. (2017). Why is Posterior Sampling Better than Optimism for Reinforcement Learning?. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2701-2710. Available from https://proceedings.mlr.press/v70/osband17a.html.
