Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems

Guannan Qu, Adam Wierman, Na Li
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:256-266, 2020.

Abstract

We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner, where the objective is to find localized policies such that the (discounted) global reward is maximized. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this paper, we propose a Scalable Actor Critic (SAC) framework that exploits the network structure and finds a localized policy that is an $O(\rho^\kappa)$-approximation of a stationary point of the objective for some $\rho\in(0,1)$, with complexity that scales with the local state-action space size of the largest $\kappa$-hop neighborhood of the network.
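To make the scaling claim concrete, the following sketch (an illustration, not the paper's algorithm; the line-graph topology, state size `s`, and action size `a` are assumptions for the example) compares the joint state-action space size, which is exponential in the number of agents, against the state-action space size of the largest $\kappa$-hop neighborhood, which is what SAC's complexity depends on.

```python
# Illustrative sketch (not from the paper): why localized policies help.
# Assume n agents on a line graph, each with local state space of size s
# and local action space of size a. The joint state-action space grows
# exponentially in n; a kappa-hop neighborhood stays small.

def khop_neighborhood(adj, i, kappa):
    """Return the set of nodes within kappa hops of node i (BFS)."""
    frontier, seen = {i}, {i}
    for _ in range(kappa):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return seen

# Line graph on n nodes: node i is connected to i-1 and i+1.
n, kappa, s, a = 20, 2, 3, 2
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

joint_size = (s * a) ** n   # global state-action space: (s*a)^n
largest = max(len(khop_neighborhood(adj, i, kappa)) for i in range(n))
local_size = (s * a) ** largest   # largest kappa-hop neighborhood

print(joint_size)   # 6**20 = 3656158440062976
print(largest)      # 5 nodes in a 2-hop neighborhood on a line
print(local_size)   # 6**5 = 7776
```

Even at $n=20$, the joint space is about $3.7\times 10^{15}$ points, while the largest 2-hop neighborhood on a line contains only 5 agents, giving a local space of 7776 points; this gap is the source of SAC's scalability.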

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-qu20a,
  title     = {Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems},
  author    = {Qu, Guannan and Wierman, Adam and Li, Na},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {256--266},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/qu20a/qu20a.pdf},
  url       = {https://proceedings.mlr.press/v120/qu20a.html},
  abstract  = {We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner where the objective is to find localized policies such that the (discounted) global reward is maximized. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this paper, we propose a Scalable Actor Critic (SAC) framework that exploits the network structure and finds a localized policy that is an $O(\rho^\kappa)$-approximation of a stationary point of the objective for some $\rho\in(0,1)$, with complexity that scales with the local state-action space size of the largest $\kappa$-hop neighborhood of the network.}
}
Endnote
%0 Conference Paper
%T Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems
%A Guannan Qu
%A Adam Wierman
%A Na Li
%B Proceedings of the 2nd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2020
%E Alexandre M. Bayen
%E Ali Jadbabaie
%E George Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire Tomlin
%E Melanie Zeilinger
%F pmlr-v120-qu20a
%I PMLR
%P 256--266
%U https://proceedings.mlr.press/v120/qu20a.html
%V 120
%X We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner where the objective is to find localized policies such that the (discounted) global reward is maximized. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this paper, we propose a Scalable Actor Critic (SAC) framework that exploits the network structure and finds a localized policy that is an $O(\rho^\kappa)$-approximation of a stationary point of the objective for some $\rho\in(0,1)$, with complexity that scales with the local state-action space size of the largest $\kappa$-hop neighborhood of the network.
APA
Qu, G., Wierman, A. & Li, N. (2020). Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:256-266. Available from https://proceedings.mlr.press/v120/qu20a.html.
