Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information

Yonathan Efroni, Dylan J Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5062-5127, 2022.

Abstract

In real-world reinforcement learning applications the learner’s observation space is ubiquitously high-dimensional with both relevant and irrelevant information about the task at hand. Learning from high-dimensional observations has been the subject of extensive investigation in supervised learning and statistics (e.g., via sparsity), but analogous issues in reinforcement learning are not well understood, even in finite state/action (tabular) domains. We introduce a new problem setting for reinforcement learning, the Exogenous Markov Decision Process (ExMDP), in which the state space admits an (unknown) factorization into a small controllable (or, endogenous) component and a large irrelevant (or, exogenous) component; the exogenous component is independent of the learner’s actions, but evolves in an arbitrary, temporally correlated fashion. We provide a new algorithm, OSSR, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component and nearly independent of the size of the exogenous component, thereby offering a doubly-exponential improvement over off-the-shelf algorithms. Our results highlight for the first time that sample-efficient reinforcement learning is possible in the presence of exogenous information, and provide a simple, user-friendly benchmark for investigation going forward.
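To make the setting concrete, below is a minimal, illustrative simulator of an ExMDP-style environment in Python. It is a sketch under assumptions, not the paper's construction: the class name, state sizes, and dynamics are invented for illustration. The full state is a pair (endogenous, exogenous); the endogenous part responds to the action, while the exogenous part evolves independently of the action but in a temporally correlated way, and the reward depends only on the endogenous part. The learner, of course, observes the joint state without knowing this factorization.

import random

class ToyExMDP:
    """Illustrative ExMDP-style simulator (hypothetical, not from the paper)."""

    def __init__(self, n_endo=5, n_exo=1000, horizon=10, seed=0):
        self.rng = random.Random(seed)
        self.n_endo, self.n_exo, self.horizon = n_endo, n_exo, horizon

    def reset(self):
        self.h = 0
        self.endo = 0                              # small controllable component
        self.exo = self.rng.randrange(self.n_exo)  # large irrelevant component
        return (self.endo, self.exo)

    def step(self, action):
        # Endogenous transition: depends only on (endo, action).
        self.endo = (self.endo + action) % self.n_endo
        # Exogenous transition: ignores the action, but is correlated with its
        # previous value (here, a lazy random walk).
        if self.rng.random() < 0.5:
            self.exo = (self.exo + 1) % self.n_exo
        # Reward depends only on the endogenous component.
        reward = 1.0 if self.endo == self.n_endo - 1 else 0.0
        self.h += 1
        done = self.h >= self.horizon
        return (self.endo, self.exo), reward, done

An off-the-shelf tabular algorithm applied to the joint state would face on the order of n_endo * n_exo states, whereas the paper's guarantee for OSSR scales polynomially with the size of the endogenous component and is nearly independent of the size of the exogenous component.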

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-efroni22a,
  title     = {Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information},
  author    = {Efroni, Yonathan and Foster, Dylan J and Misra, Dipendra and Krishnamurthy, Akshay and Langford, John},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {5062--5127},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/efroni22a/efroni22a.pdf},
  url       = {https://proceedings.mlr.press/v178/efroni22a.html},
  abstract  = {In real-world reinforcement learning applications the learner’s observation space is ubiquitously high-dimensional with both relevant and irrelevant information about the task at hand. Learning from high-dimensional observations has been the subject of extensive investigation in supervised learning and statistics (e.g., via sparsity), but analogous issues in reinforcement learning are not well understood, even in finite state/action (tabular) domains. We introduce a new problem setting for reinforcement learning, the Exogenous Markov Decision Process (ExMDP), in which the state space admits an (unknown) factorization into a small controllable (or, endogenous) component and a large irrelevant (or, exogenous) component; the exogenous component is independent of the learner’s actions, but evolves in an arbitrary, temporally correlated fashion. We provide a new algorithm, OSSR, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component and nearly independent of the size of the exogenous component, thereby offering a doubly-exponential improvement over off-the-shelf algorithms. Our results highlight for the first time that sample-efficient reinforcement learning is possible in the presence of exogenous information, and provide a simple, user-friendly benchmark for investigation going forward.}
}
Endnote
%0 Conference Paper
%T Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information
%A Yonathan Efroni
%A Dylan J Foster
%A Dipendra Misra
%A Akshay Krishnamurthy
%A John Langford
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-efroni22a
%I PMLR
%P 5062--5127
%U https://proceedings.mlr.press/v178/efroni22a.html
%V 178
%X In real-world reinforcement learning applications the learner’s observation space is ubiquitously high-dimensional with both relevant and irrelevant information about the task at hand. Learning from high-dimensional observations has been the subject of extensive investigation in supervised learning and statistics (e.g., via sparsity), but analogous issues in reinforcement learning are not well understood, even in finite state/action (tabular) domains. We introduce a new problem setting for reinforcement learning, the Exogenous Markov Decision Process (ExMDP), in which the state space admits an (unknown) factorization into a small controllable (or, endogenous) component and a large irrelevant (or, exogenous) component; the exogenous component is independent of the learner’s actions, but evolves in an arbitrary, temporally correlated fashion. We provide a new algorithm, OSSR, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component and nearly independent of the size of the exogenous component, thereby offering a doubly-exponential improvement over off-the-shelf algorithms. Our results highlight for the first time that sample-efficient reinforcement learning is possible in the presence of exogenous information, and provide a simple, user-friendly benchmark for investigation going forward.
APA
Efroni, Y., Foster, D.J., Misra, D., Krishnamurthy, A. & Langford, J. (2022). Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:5062-5127. Available from https://proceedings.mlr.press/v178/efroni22a.html.
