Optimistic Policy Optimization with Bandit Feedback

Lior Shani, Yonathan Efroni, Aviv Rosenberg, Shie Mannor
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8604-8613, 2020.

Abstract

Policy optimization methods are one of the most widely used classes of Reinforcement Learning (RL) algorithms. Yet, so far, such methods have been mostly analyzed from an optimization perspective, without addressing the problem of exploration, or by making strong assumptions on the interaction with the environment. In this paper we consider model-based RL in the tabular finite-horizon MDP setting with unknown transitions and bandit feedback. For this setting, we propose an optimistic trust region policy optimization (TRPO) algorithm for which we establish $\tilde O(\sqrt{S^2 A H^4 K})$ regret for stochastic rewards. Furthermore, we prove $\tilde O( \sqrt{ S^2 A H^4 } K^{2/3} ) $ regret for adversarial rewards. Interestingly, this result matches previous bounds derived for the bandit feedback case, yet with known transitions. To the best of our knowledge, the two results are the first sub-linear regret bounds obtained for policy optimization algorithms with unknown transitions and bandit feedback.
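The algorithmic template the abstract refers to alternates optimistic policy evaluation (value estimates inflated by an exploration bonus built from visit counts) with a KL-regularized policy-improvement step. Below is a minimal illustrative sketch of one such round in a tabular finite-horizon MDP; the bonus form, step size, and all variable names are assumptions for exposition, not the paper's exact algorithm.

# Illustrative sketch only (assumed names and constants), not the paper's exact
# procedure: one round of optimistic evaluation + KL-regularized improvement
# in a tabular finite-horizon MDP.
import numpy as np

S, A, H = 5, 3, 4        # number of states, actions, horizon length
eta = 0.5                # mirror-descent step size (assumed)

rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(S), size=(S, A))   # empirical transitions, shape (S, A, S)
r_hat = rng.uniform(0.0, 1.0, size=(S, A))       # empirical mean rewards
counts = rng.integers(1, 50, size=(S, A))        # state-action visit counts so far
bonus = np.sqrt(1.0 / counts)                    # UCB-style exploration bonus (assumed form)

pi = np.full((H, S, A), 1.0 / A)                 # current stochastic policy, uniform here

# Optimistic policy evaluation: backward recursion with bonus-inflated rewards.
Q = np.zeros((H, S, A))
V = np.zeros((H + 1, S))
for h in range(H - 1, -1, -1):
    Q[h] = np.clip(r_hat + bonus + P_hat @ V[h + 1], 0.0, H)
    V[h] = np.sum(pi[h] * Q[h], axis=1)

# KL-regularized improvement: exponentiated-gradient / mirror-descent step,
# pi_new(a|s) proportional to pi(a|s) * exp(eta * Q(s, a)).
pi_new = pi * np.exp(eta * Q)
pi_new /= pi_new.sum(axis=2, keepdims=True)

Under bandit feedback only the rewards along the visited trajectory are observed, so in practice r_hat would itself be estimated from counts rather than read from a full reward table.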

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-shani20a,
  title     = {Optimistic Policy Optimization with Bandit Feedback},
  author    = {Shani, Lior and Efroni, Yonathan and Rosenberg, Aviv and Mannor, Shie},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8604--8613},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/shani20a/shani20a.pdf},
  url       = {https://proceedings.mlr.press/v119/shani20a.html}
}
Endnote
%0 Conference Paper
%T Optimistic Policy Optimization with Bandit Feedback
%A Lior Shani
%A Yonathan Efroni
%A Aviv Rosenberg
%A Shie Mannor
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-shani20a
%I PMLR
%P 8604--8613
%U https://proceedings.mlr.press/v119/shani20a.html
%V 119
APA
Shani, L., Efroni, Y., Rosenberg, A. & Mannor, S. (2020). Optimistic Policy Optimization with Bandit Feedback. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8604-8613. Available from https://proceedings.mlr.press/v119/shani20a.html.
