A decentralized policy gradient approach to multi-task reinforcement learning

Sihan Zeng, Malik Aqeel Anwar, Thinh T. Doan, Arijit Raychowdhury, Justin Romberg
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1002-1012, 2021.

Abstract

We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method. The goal in MTRL is to learn a common policy that operates effectively in different environments; these environments have similar (or overlapping) state spaces, but have different rewards and dynamics. We highlight two fundamental challenges in MTRL that are not present in its single-task counterpart, and illustrate them with simple examples. We then develop a decentralized entropy-regularized policy gradient method for solving the MTRL problem, and study its finite-time convergence rate. We demonstrate the effectiveness of the proposed method using a series of numerical experiments. These experiments range from small-scale "GridWorld" problems that readily demonstrate the trade-offs involved in multi-task learning to large-scale problems, where common policies are learned to navigate an airborne drone in multiple (simulated) environments.
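The method named in the abstract combines entropy-regularized policy gradient updates with a decentralized step that couples the task-specific learners. The sketch below illustrates that general idea for a tabular softmax policy; the function names, the REINFORCE-style gradient estimator, and the mixing-matrix averaging are illustrative assumptions, not the paper's actual algorithm or implementation.

import numpy as np

# Minimal sketch (not the paper's implementation): one round of a decentralized
# entropy-regularized policy gradient method. Each task keeps its own copy of a
# tabular softmax policy; local REINFORCE-style updates are followed by a
# consensus (averaging) step with a doubly stochastic mixing matrix.

def softmax_policy(theta, state):
    """Action probabilities of a tabular softmax policy at one state."""
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def local_update(theta, trajectory, step_size=0.1, temperature=0.05, gamma=0.99):
    """Entropy-regularized REINFORCE update from one trajectory of (s, a, r) tuples."""
    theta = theta.copy()
    ret = 0.0
    for state, action, reward in reversed(trajectory):   # discounted return, computed backwards
        ret = reward + gamma * ret
        p = softmax_policy(theta, state)
        log_p = np.log(p + 1e-12)
        grad_log = -p
        grad_log[action] += 1.0                           # gradient of log pi(a|s) w.r.t. the logits
        entropy_grad = -p * (log_p - p @ log_p)           # gradient of the policy entropy w.r.t. the logits
        theta[state] += step_size * (ret * grad_log + temperature * entropy_grad)
    return theta

def consensus_step(thetas, mixing_matrix):
    """Mix task-specific parameters: theta_i <- sum_j W[i, j] * theta_j."""
    stacked = np.stack(thetas)                            # shape: (num_tasks, |S|, |A|)
    return list(np.tensordot(mixing_matrix, stacked, axes=1))

# Example usage: two tasks with uniform averaging.
# thetas = [np.zeros((num_states, num_actions)) for _ in range(2)]
# thetas = [local_update(th, traj) for th, traj in zip(thetas, trajectories)]
# thetas = consensus_step(thetas, np.array([[0.5, 0.5], [0.5, 0.5]]))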

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-zeng21a,
  title     = {A decentralized policy gradient approach to multi-task reinforcement learning},
  author    = {Zeng, Sihan and Anwar, Malik Aqeel and Doan, Thinh T. and Raychowdhury, Arijit and Romberg, Justin},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {1002--1012},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/zeng21a/zeng21a.pdf},
  url       = {https://proceedings.mlr.press/v161/zeng21a.html},
  abstract  = {We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method. The goal in MTRL is to learn a common policy that operates effectively in different environments; these environments have similar (or overlapping) state spaces, but have different rewards and dynamics. We highlight two fundamental challenges in MTRL that are not present in its single-task counterpart, and illustrate them with simple examples. We then develop a decentralized entropy-regularized policy gradient method for solving the MTRL problem, and study its finite-time convergence rate. We demonstrate the effectiveness of the proposed method using a series of numerical experiments. These experiments range from small-scale "GridWorld" problems that readily demonstrate the trade-offs involved in multi-task learning to large-scale problems, where common policies are learned to navigate an airborne drone in multiple (simulated) environments.}
}
Endnote
%0 Conference Paper
%T A decentralized policy gradient approach to multi-task reinforcement learning
%A Sihan Zeng
%A Malik Aqeel Anwar
%A Thinh T. Doan
%A Arijit Raychowdhury
%A Justin Romberg
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-zeng21a
%I PMLR
%P 1002--1012
%U https://proceedings.mlr.press/v161/zeng21a.html
%V 161
%X We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method. The goal in MTRL is to learn a common policy that operates effectively in different environments; these environments have similar (or overlapping) state spaces, but have different rewards and dynamics. We highlight two fundamental challenges in MTRL that are not present in its single-task counterpart, and illustrate them with simple examples. We then develop a decentralized entropy-regularized policy gradient method for solving the MTRL problem, and study its finite-time convergence rate. We demonstrate the effectiveness of the proposed method using a series of numerical experiments. These experiments range from small-scale "GridWorld" problems that readily demonstrate the trade-offs involved in multi-task learning to large-scale problems, where common policies are learned to navigate an airborne drone in multiple (simulated) environments.
APA
Zeng, S., Anwar, M.A., Doan, T.T., Raychowdhury, A. & Romberg, J. (2021). A decentralized policy gradient approach to multi-task reinforcement learning. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1002-1012. Available from https://proceedings.mlr.press/v161/zeng21a.html.