Provable Benefits of Representational Transfer in Reinforcement Learning

Alekh Agarwal, Yuda Song, Wen Sun, Kaiwen Wang, Mengdi Wang, Xuezhou Zhang
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:2114-2187, 2023.

Abstract

We study the problem of representational transfer in RL, where an agent first pretrains on a number of \emph{source tasks} to discover a shared representation, which is subsequently used to learn a good policy in a \emph{target task}. We propose a new notion of task relatedness between source and target tasks, and develop a novel approach for representational transfer under this assumption. Concretely, we show that given generative access to the source tasks, we can discover a representation with which subsequent linear RL techniques quickly converge to a near-optimal policy in the target task. The sample complexity is close to that of knowing the ground-truth features in the target task, and comparable to prior representation-learning results in the source tasks. We complement our positive results with lower bounds in the absence of generative access, and validate our findings with an empirical evaluation on rich-observation MDPs that require deep exploration. In our experiments, we observe a speed-up in target-task learning from pre-training, and also validate the need for generative access to the source tasks.
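To make the two-phase pipeline described above concrete, below is a minimal sketch on toy tabular low-rank MDPs. This is an illustrative assumption, not the paper's algorithm or experimental setup: the SVD-based feature learner, the use of exact transition matrices in place of generatively sampled ones, the plain least-squares value iteration without exploration bonuses, and all constants are stand-ins chosen to keep the example self-contained. The idea it shows: source tasks share ground-truth features phi(s, a); generative access lets us estimate their transition operators, whose shared column space recovers a feature map; the target task is then solved by linear RL with the learned features.

# Hedged sketch of representational transfer on toy low-rank MDPs.
# Phase 1 learns shared features from source tasks; Phase 2 runs
# least-squares value iteration (LSVI) in the target task with them.
# Illustrative only; not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
S, A, d, H = 20, 4, 3, 5           # states, actions, feature dim, horizon
T_src = 4                          # number of source tasks

# Ground-truth shared features phi(s, a) in R^d, one row per (s, a) pair.
phi = rng.dirichlet(np.ones(d), size=S * A)            # (S*A, d)

def make_task():
    """One low-rank task: P(s'|s,a) = phi(s,a) @ mu, reward linear in phi."""
    mu = rng.dirichlet(np.ones(S), size=d)             # (d, S)
    theta = rng.uniform(size=d)
    P = phi @ mu                                       # (S*A, S)
    R = (phi @ theta).reshape(S, A)
    return P, R

# Phase 1: "generative access" to source tasks, idealized here as exact
# transition matrices. Stack them and take top-d left singular vectors
# to recover the shared feature subspace.
stacked = np.hstack([make_task()[0] for _ in range(T_src)])  # (S*A, S*T_src)
U, _, _ = np.linalg.svd(stacked, full_matrices=False)
phi_hat = U[:, :d]                                     # learned features

# Phase 2: LSVI in the target task using the learned features.
P_tgt, R_tgt = make_task()
Q = np.zeros((H + 1, S, A))
for h in reversed(range(H)):
    V_next = Q[h + 1].max(axis=1)                      # (S,)
    target = R_tgt.ravel() + P_tgt @ V_next            # Bellman targets, (S*A,)
    w, *_ = np.linalg.lstsq(phi_hat, target, rcond=None)
    Q[h] = (phi_hat @ w).reshape(S, A)

greedy = Q[0].argmax(axis=1)
print("greedy action per state at h=0:", greedy)

Because the stacked source transition matrices have rank d under the low-rank model, the top-d singular vectors span the same subspace as the true features, so the target-task regression incurs no approximation error in this idealized setting; the paper's analysis handles the realistic case of sampled data and exploration.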

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-agarwal23b,
  title     = {Provable Benefits of Representational Transfer in Reinforcement Learning},
  author    = {Agarwal, Alekh and Song, Yuda and Sun, Wen and Wang, Kaiwen and Wang, Mengdi and Zhang, Xuezhou},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {2114--2187},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/agarwal23b/agarwal23b.pdf},
  url       = {https://proceedings.mlr.press/v195/agarwal23b.html}
}
APA
Agarwal, A., Song, Y., Sun, W., Wang, K., Wang, M., & Zhang, X. (2023). Provable Benefits of Representational Transfer in Reinforcement Learning. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:2114-2187. Available from https://proceedings.mlr.press/v195/agarwal23b.html.
