Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches

Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2898-2933, 2019.

Abstract

We study the sample complexity of model-based reinforcement learning (henceforth RL) in general contextual decision processes that require strategic exploration to find a near-optimal policy. We design new algorithms for RL with a generic model class and analyze their statistical properties. Our algorithms have sample complexity governed by a new structural parameter called the witness rank, which we show to be small in several settings of interest, including factored MDPs. We also show that the witness rank is never larger than the recently proposed Bellman rank parameter governing the sample complexity of the model-free algorithm OLIVE (Jiang et al., 2017), the only other provably sample-efficient algorithm for global exploration at this level of generality. Focusing on the special case of factored MDPs, we prove an exponential lower bound for a general class of model-free approaches, including OLIVE, which, when combined with our algorithmic results, demonstrates exponential separation between model-based and model-free RL in some rich-observation settings.

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-sun19a,
  title     = {Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches},
  author    = {Sun, Wen and Jiang, Nan and Krishnamurthy, Akshay and Agarwal, Alekh and Langford, John},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {2898--2933},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/sun19a/sun19a.pdf},
  url       = {https://proceedings.mlr.press/v99/sun19a.html},
  abstract  = {We study the sample complexity of model-based reinforcement learning (henceforth RL) in general contextual decision processes that require strategic exploration to find a near-optimal policy. We design new algorithms for RL with a generic model class and analyze their statistical properties. Our algorithms have sample complexity governed by a new structural parameter called the witness rank, which we show to be small in several settings of interest, including factored MDPs. We also show that the witness rank is never larger than the recently proposed Bellman rank parameter governing the sample complexity of the model-free algorithm OLIVE (Jiang et al., 2017), the only other provably sample-efficient algorithm for global exploration at this level of generality. Focusing on the special case of factored MDPs, we prove an exponential lower bound for a general class of model-free approaches, including OLIVE, which, when combined with our algorithmic results, demonstrates exponential separation between model-based and model-free RL in some rich-observation settings.}
}
Endnote
%0 Conference Paper
%T Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches
%A Wen Sun
%A Nan Jiang
%A Akshay Krishnamurthy
%A Alekh Agarwal
%A John Langford
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-sun19a
%I PMLR
%P 2898--2933
%U https://proceedings.mlr.press/v99/sun19a.html
%V 99
%X We study the sample complexity of model-based reinforcement learning (henceforth RL) in general contextual decision processes that require strategic exploration to find a near-optimal policy. We design new algorithms for RL with a generic model class and analyze their statistical properties. Our algorithms have sample complexity governed by a new structural parameter called the witness rank, which we show to be small in several settings of interest, including factored MDPs. We also show that the witness rank is never larger than the recently proposed Bellman rank parameter governing the sample complexity of the model-free algorithm OLIVE (Jiang et al., 2017), the only other provably sample-efficient algorithm for global exploration at this level of generality. Focusing on the special case of factored MDPs, we prove an exponential lower bound for a general class of model-free approaches, including OLIVE, which, when combined with our algorithmic results, demonstrates exponential separation between model-based and model-free RL in some rich-observation settings.
APA
Sun, W., Jiang, N., Krishnamurthy, A., Agarwal, A., & Langford, J. (2019). Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:2898-2933. Available from https://proceedings.mlr.press/v99/sun19a.html.