CCNet: Cluster-Coordinated Net for Learning Multi-agent Communication Protocols with Reinforcement Learning


Xin Wen, Zheng-Jun Zha, Zilei Wang, Liansheng Zhuang, Houqiang Li;
Proceedings of The 10th Asian Conference on Machine Learning, PMLR 95:582-597, 2018.

Abstract

Multi-agent systems are crucial for many practical applications. Recent years have witnessed extensive research on multi-agent tasks with reinforcement learning (RL) algorithms. Traditional RL algorithms often fail to learn the cooperation between agents that is vital for multi-agent problems. A promising solution is to establish a communication protocol among agents. However, existing approaches often suffer from generalization challenges, especially in tasks with partial observation and dynamic variation in the number of agents. In this paper, we develop a Cluster-Coordinated Network (CCNet) to address the “learning-to-communicate” problem in multi-agent systems by combining a trainable Vector of Locally Aggregated Descriptors (VLAD) algorithm with reinforcement learning. Equipped with a VLAD-based, end-to-end trainable communication-information processing module (the VLAD Processing Core), CCNet can learn efficient communication protocols from scratch in partially observable environments and is robust to dynamic changes in the number of agents. Moreover, with the help of communication, CCNet exhibits less non-stationarity when trained with common RL algorithms. We evaluate the proposed CCNet on two partially observable multi-agent tasks, i.e., Traffic Junction and Combat Task. The experimental results demonstrate that CCNet is effective and outperforms state-of-the-art methods by a large margin.
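The paper itself details the VLAD Processing Core; as a rough illustration of why VLAD-style aggregation is insensitive to the number of agents, the following is a minimal NetVLAD-style soft-assignment sketch. All names, shapes, and normalization choices here are our assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def vlad_aggregate(messages, centers, assign_w, assign_b):
    """Aggregate a variable number of agent messages into a
    fixed-size descriptor via soft-assignment VLAD (illustrative sketch).

    messages : (N, D) array, one D-dim message per agent
    centers  : (K, D) learnable cluster centers
    assign_w : (D, K) learnable soft-assignment weights
    assign_b : (K,)   learnable soft-assignment biases
    returns  : (K * D,) descriptor whose size is independent of N
    """
    logits = messages @ assign_w + assign_b            # (N, K)
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)                  # soft assignments, (N, K)
    # Residual of each message from each center, weighted and summed over agents.
    resid = messages[:, None, :] - centers[None, :, :]      # (N, K, D)
    v = (a[:, :, None] * resid).sum(axis=0)                 # (K, D)
    v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-12   # per-cluster normalization
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)                  # final L2 normalization

# The descriptor size (K * D) does not depend on the number of agents N,
# which is what makes such a module robust to agents joining or leaving.
rng = np.random.default_rng(0)
D, K = 8, 4
centers = rng.normal(size=(K, D))
w, b = rng.normal(size=(D, K)), np.zeros(K)
v3 = vlad_aggregate(rng.normal(size=(3, D)), centers, w, b)  # 3 agents
v7 = vlad_aggregate(rng.normal(size=(7, D)), centers, w, b)  # 7 agents
assert v3.shape == v7.shape == (K * D,)
```

Because the sum over agents collapses the variable dimension N, the downstream policy network always receives a fixed-length input regardless of how many agents are communicating.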
