REPAINT: Knowledge Transfer in Deep Reinforcement Learning

Yunzhe Tao, Sahika Genc, Jonathan Chung, Tao Sun, Sunil Mallya
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10141-10152, 2021.

Abstract

Accelerating learning processes for complex tasks by leveraging previously learned tasks has been one of the most challenging problems in reinforcement learning, especially when the similarity between source and target tasks is low. This work proposes the REPresentation And INstance Transfer (REPAINT) algorithm for knowledge transfer in deep reinforcement learning. REPAINT not only transfers the representation of a pre-trained teacher policy in on-policy learning, but also uses an advantage-based experience selection approach to transfer useful samples collected by following the teacher policy in off-policy learning. Our experimental results on several benchmark tasks show that REPAINT significantly reduces the total training time in generic cases of task similarity. In particular, when the source tasks are dissimilar to, or sub-tasks of, the target tasks, REPAINT outperforms other baselines in both training-time reduction and asymptotic performance of return scores.
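The advantage-based experience selection described above can be illustrated with a minimal sketch: transitions gathered by following the teacher policy are kept for off-policy updates only if their estimated advantage under the student's current value function is positive. All names, the data layout, and the zero threshold here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of advantage-based experience selection.
# Transitions are (state, action, reward, next_state) tuples collected
# by following a teacher policy; `value_fn` is the student's value estimate.

def estimate_advantage(transition, value_fn, gamma=0.99):
    """One-step advantage estimate: r + gamma * V(s') - V(s)."""
    state, _action, reward, next_state = transition
    return reward + gamma * value_fn(next_state) - value_fn(state)

def select_experiences(transitions, value_fn, threshold=0.0, gamma=0.99):
    """Keep only teacher-collected transitions whose estimated
    advantage exceeds the threshold (an assumed filtering rule)."""
    return [
        t for t in transitions
        if estimate_advantage(t, value_fn, gamma) > threshold
    ]

# Toy usage with a trivial value function over integer states.
value_fn = lambda s: float(s)
transitions = [
    (0, "a", 1.0, 1),   # advantage = 1.0 + 0.99*1 - 0 = 1.99 -> kept
    (2, "b", -1.0, 0),  # advantage = -1.0 + 0.99*0 - 2 = -3.0 -> dropped
]
kept = select_experiences(transitions, value_fn)
```

In this toy run only the first transition survives the filter; in practice the student would train off-policy on the retained samples (with importance corrections as appropriate).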

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-tao21a,
  title     = {{REPAINT}: Knowledge Transfer in Deep Reinforcement Learning},
  author    = {Tao, Yunzhe and Genc, Sahika and Chung, Jonathan and Sun, Tao and Mallya, Sunil},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10141--10152},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/tao21a/tao21a.pdf},
  url       = {https://proceedings.mlr.press/v139/tao21a.html},
  abstract  = {Accelerating learning processes for complex tasks by leveraging previously learned tasks has been one of the most challenging problems in reinforcement learning, especially when the similarity between source and target tasks is low. This work proposes REPresentation And INstance Transfer (REPAINT) algorithm for knowledge transfer in deep reinforcement learning. REPAINT not only transfers the representation of a pre-trained teacher policy in the on-policy learning, but also uses an advantage-based experience selection approach to transfer useful samples collected following the teacher policy in the off-policy learning. Our experimental results on several benchmark tasks show that REPAINT significantly reduces the total training time in generic cases of task similarity. In particular, when the source tasks are dissimilar to, or sub-tasks of, the target tasks, REPAINT outperforms other baselines in both training-time reduction and asymptotic performance of return scores.}
}
Endnote
%0 Conference Paper
%T REPAINT: Knowledge Transfer in Deep Reinforcement Learning
%A Yunzhe Tao
%A Sahika Genc
%A Jonathan Chung
%A Tao Sun
%A Sunil Mallya
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-tao21a
%I PMLR
%P 10141--10152
%U https://proceedings.mlr.press/v139/tao21a.html
%V 139
%X Accelerating learning processes for complex tasks by leveraging previously learned tasks has been one of the most challenging problems in reinforcement learning, especially when the similarity between source and target tasks is low. This work proposes REPresentation And INstance Transfer (REPAINT) algorithm for knowledge transfer in deep reinforcement learning. REPAINT not only transfers the representation of a pre-trained teacher policy in the on-policy learning, but also uses an advantage-based experience selection approach to transfer useful samples collected following the teacher policy in the off-policy learning. Our experimental results on several benchmark tasks show that REPAINT significantly reduces the total training time in generic cases of task similarity. In particular, when the source tasks are dissimilar to, or sub-tasks of, the target tasks, REPAINT outperforms other baselines in both training-time reduction and asymptotic performance of return scores.
APA
Tao, Y., Genc, S., Chung, J., Sun, T., & Mallya, S. (2021). REPAINT: Knowledge Transfer in Deep Reinforcement Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10141-10152. Available from https://proceedings.mlr.press/v139/tao21a.html.