Root-n-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank

Kefan Dong, Jian Peng, Yining Wang, Yuan Zhou
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:1554-1557, 2020.

Abstract

In this paper, we consider the problem of online learning of Markov decision processes (MDPs) with very large state spaces. Under the assumptions of realizable function approximation and low Bellman rank, we develop an online learning algorithm that learns the optimal value function while achieving a root-n cumulative regret during the learning process. Our learning algorithm, Adaptive Value-function Elimination (AVE), is inspired by the policy elimination algorithm of Jiang et al. (2017), known as OLIVE. One of our key technical contributions in AVE is to formulate the elimination steps in OLIVE as contextual bandit problems. This formulation enables us to apply the active elimination and expert weighting methods of Dudik et al. (2011), instead of the random action exploration scheme used in the original OLIVE algorithm, for more efficient exploration and tighter control of the regret incurred in each policy elimination step. To the best of our knowledge, this is the first root-n-regret result for reinforcement learning in stochastic MDPs with general value function approximation.
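
For readers less familiar with the Bellman rank framework, the following is a brief sketch, in our own paraphrased notation following Jiang et al. (2017), of the average Bellman error and the low-Bellman-rank condition that OLIVE-style elimination relies on; the precise quantities used by AVE are defined in the full paper.

% Average Bellman error of a candidate value function f \in \mathcal{F} at level h,
% measured on the state distribution obtained by rolling in with policy \pi
% (paraphrased from Jiang et al., 2017; not taken verbatim from this paper):
\[
  \mathcal{E}(f, \pi, h) \;=\;
  \mathbb{E}\Big[\, f(x_h, a_h) - r_h - f\big(x_{h+1}, \pi_f(x_{h+1})\big)
  \;\Big|\; x_{1:h} \sim \pi,\ a_h = \pi_f(x_h) \Big],
\]
% where \pi_f denotes the greedy policy with respect to f. The MDP is said to have
% Bellman rank at most M with respect to \mathcal{F} if, for every level h, the matrix
\[
  \big[\, \mathcal{E}(f, \pi_g, h) \,\big]_{g \in \mathcal{F},\; f \in \mathcal{F}}
\]
% has rank at most M. OLIVE-style elimination discards every surviving f whose estimated
% average Bellman error under the current roll-in policy is large; the low-rank condition
% bounds the number of such elimination rounds.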

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-dong20a,
  title     = {Root-n-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank},
  author    = {Dong, Kefan and Peng, Jian and Wang, Yining and Zhou, Yuan},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {1554--1557},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/dong20a/dong20a.pdf},
  url       = {https://proceedings.mlr.press/v125/dong20a.html}
}
APA
Dong, K., Peng, J., Wang, Y., & Zhou, Y. (2020). Root-n-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:1554-1557. Available from https://proceedings.mlr.press/v125/dong20a.html.
