Learning Practical Communication Strategies in Cooperative Multi-Agent Reinforcement Learning

Diyi Hu, Chi Zhang, Viktor Prasanna, Bhaskar Krishnamachari
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:467-482, 2023.

Abstract

In Multi-Agent Reinforcement Learning, communication is critical to encourage cooperation among agents. Communication in realistic wireless networks can be highly unreliable due to network conditions varying with agents’ mobility, and stochasticity in the transmission process. We propose a framework to learn practical communication strategies by addressing three fundamental questions: (1) \emph{When}: Agents learn the timing of communication based on not only message importance but also wireless channel conditions. (2) \emph{What}: Agents augment message contents with wireless network measurements to better select the game and communication actions. (3) \emph{How}: Agents use a novel neural message encoder to preserve all information from received messages, regardless of the number and order of messages. Simulating standard benchmarks under realistic wireless network settings, we show significant improvements in game performance, convergence speed and communication efficiency compared with state-of-the-art.
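The abstract's "How" component hinges on a message encoder whose output does not change with the number or arrival order of received messages. Below is a minimal, hypothetical sketch of one way to obtain that property, a DeepSets-style sum pooling written in PyTorch; the class and names (MessageEncoder, phi, rho, msg_dim) are placeholders and do not reflect the authors' actual architecture.

import torch
import torch.nn as nn

class MessageEncoder(nn.Module):
    """Encodes a variable-size, unordered set of received messages."""
    def __init__(self, msg_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # phi embeds each incoming message independently.
        self.phi = nn.Sequential(nn.Linear(msg_dim, hidden_dim), nn.ReLU())
        # rho maps the pooled embedding to the final encoding.
        self.rho = nn.Sequential(nn.Linear(hidden_dim, out_dim), nn.ReLU())

    def forward(self, messages: torch.Tensor) -> torch.Tensor:
        # messages: (num_received, msg_dim); the count varies with channel conditions.
        hidden_dim = self.rho[0].in_features
        if messages.shape[0] == 0:
            # No message got through this step (e.g., all transmissions were dropped).
            pooled = torch.zeros(hidden_dim)
        else:
            # Sum pooling accepts any number of messages and is invariant to their order.
            pooled = self.phi(messages).sum(dim=0)
        return self.rho(pooled)

# Permuting the arrival order leaves the encoding (numerically) unchanged.
enc = MessageEncoder(msg_dim=8, hidden_dim=32, out_dim=16)
msgs = torch.randn(3, 8)
assert torch.allclose(enc(msgs), enc(msgs[torch.randperm(3)]), atol=1e-5)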

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-hu23a,
  title     = {Learning Practical Communication Strategies in Cooperative Multi-Agent Reinforcement Learning},
  author    = {Hu, Diyi and Zhang, Chi and Prasanna, Viktor and Krishnamachari, Bhaskar},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {467--482},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/hu23a/hu23a.pdf},
  url       = {https://proceedings.mlr.press/v189/hu23a.html},
  abstract  = {In Multi-Agent Reinforcement Learning, communication is critical to encourage cooperation among agents. Communication in realistic wireless networks can be highly unreliable due to network conditions varying with agents’ mobility, and stochasticity in the transmission process. We propose a framework to learn practical communication strategies by addressing three fundamental questions: (1) \emph{When}: Agents learn the timing of communication based on not only message importance but also wireless channel conditions. (2) \emph{What}: Agents augment message contents with wireless network measurements to better select the game and communication actions. (3) \emph{How}: Agents use a novel neural message encoder to preserve all information from received messages, regardless of the number and order of messages. Simulating standard benchmarks under realistic wireless network settings, we show significant improvements in game performance, convergence speed and communication efficiency compared with state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Learning Practical Communication Strategies in Cooperative Multi-Agent Reinforcement Learning
%A Diyi Hu
%A Chi Zhang
%A Viktor Prasanna
%A Bhaskar Krishnamachari
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-hu23a
%I PMLR
%P 467--482
%U https://proceedings.mlr.press/v189/hu23a.html
%V 189
%X In Multi-Agent Reinforcement Learning, communication is critical to encourage cooperation among agents. Communication in realistic wireless networks can be highly unreliable due to network conditions varying with agents’ mobility, and stochasticity in the transmission process. We propose a framework to learn practical communication strategies by addressing three fundamental questions: (1) \emph{When}: Agents learn the timing of communication based on not only message importance but also wireless channel conditions. (2) \emph{What}: Agents augment message contents with wireless network measurements to better select the game and communication actions. (3) \emph{How}: Agents use a novel neural message encoder to preserve all information from received messages, regardless of the number and order of messages. Simulating standard benchmarks under realistic wireless network settings, we show significant improvements in game performance, convergence speed and communication efficiency compared with state-of-the-art.
APA
Hu, D., Zhang, C., Prasanna, V. & Krishnamachari, B. (2023). Learning Practical Communication Strategies in Cooperative Multi-Agent Reinforcement Learning. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:467-482. Available from https://proceedings.mlr.press/v189/hu23a.html.