Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents

Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Basar
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5872-5881, 2018.

Abstract

We consider the fully decentralized multi-agent reinforcement learning (MARL) problem, where the agents are connected via a time-varying and possibly sparse communication network. Specifically, we assume that the reward functions of the agents might correspond to different tasks, and are only known to the corresponding agent. Moreover, each agent makes individual decisions based on both the information observed locally and the messages received from its neighbors over the network. To maximize the globally averaged return over the network, we propose two fully decentralized actor-critic algorithms, which are applicable to large-scale MARL problems in an online fashion. Convergence guarantees are provided when the value functions are approximated within the class of linear functions. Our work appears to be the first theoretical study of fully decentralized MARL algorithms for networked agents that use function approximation.
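The consensus-style critic update that the abstract alludes to (each agent combines a locally computed update with messages from its network neighbors, under linear value-function approximation) can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes a fixed ring network, a doubly stochastic mixing matrix `C`, globally shared state features, and a plain TD(0) critic, all of which are simplifying assumptions made here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 4, 3  # number of agents, feature dimension

# Doubly stochastic consensus weight matrix for a fixed ring network
# (hypothetical topology; the paper allows time-varying graphs).
C = np.zeros((N, N))
for i in range(N):
    C[i, i] = 0.5
    C[i, (i + 1) % N] = 0.25
    C[i, (i - 1) % N] = 0.25

def td_consensus_step(w, phi, phi_next, rewards, gamma=0.95, alpha=0.1):
    """One critic iteration: local TD(0) update per agent, then
    neighbor averaging of the linear critic parameters via C."""
    w_half = np.empty_like(w)
    for i in range(w.shape[0]):
        # Each agent sees only its own reward rewards[i].
        delta = rewards[i] + gamma * phi_next @ w[i] - phi @ w[i]
        w_half[i] = w[i] + alpha * delta * phi  # local update
    return C @ w_half                           # consensus (message-passing) step

# One iteration on random features and per-agent rewards.
w = rng.standard_normal((N, d))       # per-agent critic parameters
phi = rng.standard_normal(d)          # features of current state
phi_next = rng.standard_normal(d)     # features of next state
rewards = rng.standard_normal(N)      # heterogeneous, privately observed rewards
w = td_consensus_step(w, phi, phi_next, rewards)
```

With the learning rate set to zero the step reduces to pure averaging, and repeated application drives all agents' parameters toward the network-wide mean, which is the mechanism by which agents with different reward functions can still estimate a globally averaged value.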

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-zhang18n,
  title     = {Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents},
  author    = {Zhang, Kaiqing and Yang, Zhuoran and Liu, Han and Zhang, Tong and Basar, Tamer},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5872--5881},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/zhang18n/zhang18n.pdf},
  url       = {https://proceedings.mlr.press/v80/zhang18n.html},
  abstract  = {We consider the fully decentralized multi-agent reinforcement learning (MARL) problem, where the agents are connected via a time-varying and possibly sparse communication network. Specifically, we assume that the reward functions of the agents might correspond to different tasks, and are only known to the corresponding agent. Moreover, each agent makes individual decisions based on both the information observed locally and the messages received from its neighbors over the network. To maximize the globally averaged return over the network, we propose two fully decentralized actor-critic algorithms, which are applicable to large-scale MARL problems in an online fashion. Convergence guarantees are provided when the value functions are approximated within the class of linear functions. Our work appears to be the first theoretical study of fully decentralized MARL algorithms for networked agents that use function approximation.}
}
Endnote
%0 Conference Paper
%T Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents
%A Kaiqing Zhang
%A Zhuoran Yang
%A Han Liu
%A Tong Zhang
%A Tamer Basar
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-zhang18n
%I PMLR
%P 5872--5881
%U https://proceedings.mlr.press/v80/zhang18n.html
%V 80
%X We consider the fully decentralized multi-agent reinforcement learning (MARL) problem, where the agents are connected via a time-varying and possibly sparse communication network. Specifically, we assume that the reward functions of the agents might correspond to different tasks, and are only known to the corresponding agent. Moreover, each agent makes individual decisions based on both the information observed locally and the messages received from its neighbors over the network. To maximize the globally averaged return over the network, we propose two fully decentralized actor-critic algorithms, which are applicable to large-scale MARL problems in an online fashion. Convergence guarantees are provided when the value functions are approximated within the class of linear functions. Our work appears to be the first theoretical study of fully decentralized MARL algorithms for networked agents that use function approximation.
APA
Zhang, K., Yang, Z., Liu, H., Zhang, T. & Basar, T. (2018). Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5872-5881. Available from https://proceedings.mlr.press/v80/zhang18n.html.