Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition

Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Anima Anandkumar
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6860-6870, 2021.

Abstract

In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team’s overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our methods on a resource collection task, a rescue game, and the StarCraft micromanagement tasks. We demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, we see that the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.
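
The abstract describes the coach-player loop only at a high level. Below is a minimal, hypothetical sketch (Python/NumPy) of the adaptive-communication idea it mentions: the coach recomputes per-player strategy vectors from its global view and broadcasts to a player only when the new vector has drifted enough from the one that player last received. The class name CoachGate, the strategy_dim parameter, and the drift threshold are illustrative assumptions, not the paper's implementation.

import numpy as np

class CoachGate:
    """Coach-side gate: broadcast a new per-player strategy vector only when
    it has drifted sufficiently from the one the player currently holds."""

    def __init__(self, n_players, strategy_dim, threshold=0.5):
        self.threshold = threshold
        # Last strategy vector each player received from the coach.
        self.last_sent = np.zeros((n_players, strategy_dim))

    def step(self, new_strategies):
        """new_strategies: array of shape (n_players, strategy_dim), computed
        by the coach from its global view (e.g. via attention over all entities).
        Returns the strategies the players should act on and a broadcast mask."""
        drift = np.linalg.norm(new_strategies - self.last_sent, axis=-1)
        broadcast = drift > self.threshold  # communicate only on a large enough change
        self.last_sent[broadcast] = new_strategies[broadcast]
        return self.last_sent.copy(), broadcast

Under a gate of this kind, lowering the threshold recovers per-step communication, while raising it trades communication frequency against the staleness of the players' strategies, consistent with the abstract's observation that performance can stay high even with infrequent broadcasts.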

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-liu21m,
  title     = {Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition},
  author    = {Liu, Bo and Liu, Qiang and Stone, Peter and Garg, Animesh and Zhu, Yuke and Anandkumar, Anima},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6860--6870},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21m/liu21m.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21m.html},
  abstract  = {In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team's overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our methods on a resource collection task, a rescue game, and the StarCraft micromanagement tasks. We demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, we see that the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.}
}
Endnote
%0 Conference Paper
%T Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition
%A Bo Liu
%A Qiang Liu
%A Peter Stone
%A Animesh Garg
%A Yuke Zhu
%A Anima Anandkumar
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-liu21m
%I PMLR
%P 6860--6870
%U https://proceedings.mlr.press/v139/liu21m.html
%V 139
%X In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team's overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our methods on a resource collection task, a rescue game, and the StarCraft micromanagement tasks. We demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, we see that the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.
APA
Liu, B., Liu, Q., Stone, P., Garg, A., Zhu, Y., & Anandkumar, A. (2021). Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6860-6870. Available from https://proceedings.mlr.press/v139/liu21m.html.
