Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing

Filippos Christianos, Georgios Papoudakis, Muhammad A Rahman, Stefano V Albrecht
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1989-1998, 2021.

Abstract

Sharing parameters in multi-agent deep reinforcement learning has played an essential role in allowing algorithms to scale to a large number of agents. Parameter sharing between agents significantly decreases the number of trainable parameters, shortening training times to tractable levels, and has been linked to more efficient learning. However, having all agents share the same parameters can also have a detrimental effect on learning. We demonstrate the impact of parameter sharing methods on training speed and converged returns, establishing that when applied indiscriminately, their effectiveness is highly dependent on the environment. We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. Our approach combines the increased sample efficiency of parameter sharing with the representational capacity of multiple independent networks to reduce training time and increase final returns.
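The core idea in the abstract, selective rather than indiscriminate parameter sharing, can be illustrated with a minimal sketch (an assumed interface, not the authors' implementation): agents are partitioned by ability/goal, each partition trains one shared policy network, and the trainable parameter count scales with the number of partitions rather than the number of agents.

```python
# Sketch of selective parameter sharing: agents in the same partition
# share a single policy network; distinct partitions keep independent
# networks. Names and numbers below are illustrative assumptions.

def assign_shared_networks(partition):
    """Map each agent id to the id of the network its group shares."""
    agent_to_net = {}
    for net_id, group in enumerate(partition):
        for agent in group:
            agent_to_net[agent] = net_id
    return agent_to_net

def trainable_parameters(partition, params_per_net):
    """Total trainable parameters: one network per group, not per agent."""
    return len(partition) * params_per_net

# Eight agents split into two groups with similar abilities/goals:
partition = [[0, 1, 2, 3], [4, 5, 6, 7]]
mapping = assign_shared_networks(partition)
print(mapping[5])  # agents 4-7 map to the same shared network (id 1)

# Selective sharing: 2 networks instead of 8 independent ones.
print(trainable_parameters(partition, 10_000))               # 20000
print(trainable_parameters([[a] for a in range(8)], 10_000)) # 80000
```

This makes the trade-off in the abstract concrete: full sharing (one partition) maximizes sample efficiency but limits representational capacity, while fully independent networks do the reverse; partitioning sits between the two.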

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-christianos21a,
  title     = {Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing},
  author    = {Christianos, Filippos and Papoudakis, Georgios and Rahman, Muhammad A and Albrecht, Stefano V},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1989--1998},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/christianos21a/christianos21a.pdf},
  url       = {https://proceedings.mlr.press/v139/christianos21a.html},
  abstract  = {Sharing parameters in multi-agent deep reinforcement learning has played an essential role in allowing algorithms to scale to a large number of agents. Parameter sharing between agents significantly decreases the number of trainable parameters, shortening training times to tractable levels, and has been linked to more efficient learning. However, having all agents share the same parameters can also have a detrimental effect on learning. We demonstrate the impact of parameter sharing methods on training speed and converged returns, establishing that when applied indiscriminately, their effectiveness is highly dependent on the environment. We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. Our approach combines the increased sample efficiency of parameter sharing with the representational capacity of multiple independent networks to reduce training time and increase final returns.}
}
Endnote
%0 Conference Paper
%T Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing
%A Filippos Christianos
%A Georgios Papoudakis
%A Muhammad A Rahman
%A Stefano V Albrecht
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-christianos21a
%I PMLR
%P 1989--1998
%U https://proceedings.mlr.press/v139/christianos21a.html
%V 139
%X Sharing parameters in multi-agent deep reinforcement learning has played an essential role in allowing algorithms to scale to a large number of agents. Parameter sharing between agents significantly decreases the number of trainable parameters, shortening training times to tractable levels, and has been linked to more efficient learning. However, having all agents share the same parameters can also have a detrimental effect on learning. We demonstrate the impact of parameter sharing methods on training speed and converged returns, establishing that when applied indiscriminately, their effectiveness is highly dependent on the environment. We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. Our approach combines the increased sample efficiency of parameter sharing with the representational capacity of multiple independent networks to reduce training time and increase final returns.
APA
Christianos, F., Papoudakis, G., Rahman, M.A. & Albrecht, S.V. (2021). Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1989-1998. Available from https://proceedings.mlr.press/v139/christianos21a.html.