PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning

Iou-Jen Liu, Raymond A. Yeh, Alexander G. Schwing
Proceedings of the Conference on Robot Learning, PMLR 100:590-602, 2020.

Abstract

Sample efficiency and scalability to a large number of agents are two important goals for multi-agent reinforcement learning systems. Recent works got us closer to those goals, addressing non-stationarity of the environment from a single agent’s perspective by utilizing a deep net critic which depends on all observations and actions. The critic input concatenates agent observations and actions in a user-specified order. However, since deep nets are not permutation invariant, a permuted input changes the critic output despite the environment remaining identical. To avoid this inefficiency, we propose a ‘permutation invariant critic’ (PIC), which yields identical output irrespective of the agent permutation. This consistent representation enables our model to scale to 30 times more agents and to achieve improvements of test episode reward between 15% and 50% on the challenging multi-agent particle environment (MPE).
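The core observation, that a critic built on a flattened concatenation of agent inputs is order-sensitive while a critic that pools per-agent embeddings is not, can be illustrated with a minimal NumPy sketch. This is not the paper's PIC architecture; it is a toy example under illustrative assumptions (all function names, shapes, and weights below are invented for demonstration).

```python
import numpy as np

def concat_critic(obs_act, w):
    # Order-sensitive critic: flattens agents in a fixed order, then scores.
    # Permuting the agent rows changes the flattened vector and the output.
    return float(np.tanh(obs_act.reshape(-1) @ w))

def pooled_critic(obs_act, w_embed, w_out):
    # Permutation-invariant sketch: embed each agent with shared weights,
    # mean-pool across agents (order-independent), then score the summary.
    h = np.tanh(obs_act @ w_embed)            # (n_agents, d) per-agent embeddings
    return float(np.tanh(h.mean(axis=0) @ w_out))

rng = np.random.default_rng(0)
n_agents, feat, d = 4, 6, 8
x = rng.normal(size=(n_agents, feat))         # each row: one agent's obs + action
w = rng.normal(size=(n_agents * feat,))
w_embed = rng.normal(size=(feat, d))
w_out = rng.normal(size=(d,))

perm = np.array([1, 0, 3, 2])                 # relabel the agents

# The concatenation critic gives a different value for the permuted input;
# the pooled critic gives the same value (up to floating-point summation order).
print(concat_critic(x, w), concat_critic(x[perm], w))
print(pooled_critic(x, w_embed, w_out), pooled_critic(x[perm], w_embed, w_out))
```

Mean pooling is only one way to obtain invariance; the key property is that the aggregation over agents commutes with any relabeling, so the critic's value, and hence its gradients, no longer depend on an arbitrary user-specified agent order.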

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-liu20a,
  title     = {PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning},
  author    = {Liu, Iou-Jen and Yeh, Raymond A. and Schwing, Alexander G.},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {590--602},
  year      = {2020},
  editor    = {Leslie Pack Kaelbling and Danica Kragic and Komei Sugiura},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/liu20a/liu20a.pdf},
  url       = {http://proceedings.mlr.press/v100/liu20a.html},
  abstract  = {Sample efficiency and scalability to a large number of agents are two important goals for multi-agent reinforcement learning systems. Recent works got us closer to those goals, addressing non-stationarity of the environment from a single agent’s perspective by utilizing a deep net critic which depends on all observations and actions. The critic input concatenates agent observations and actions in a user-specified order. However, since deep nets aren’t permutation invariant, a permuted input changes the critic output despite the environment remaining identical. To avoid this inefficiency, we propose a ‘permutation invariant critic’ (PIC), which yields identical output irrespective of the agent permutation. This consistent representation enables our model to scale to 30 times more agents and to achieve improvements of test episode reward between 15% to 50% on the challenging multi-agent particle environment (MPE).}
}
Endnote
%0 Conference Paper
%T PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning
%A Iou-Jen Liu
%A Raymond A. Yeh
%A Alexander G. Schwing
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-liu20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 590--602
%U http://proceedings.mlr.press
%V 100
%W PMLR
%X Sample efficiency and scalability to a large number of agents are two important goals for multi-agent reinforcement learning systems. Recent works got us closer to those goals, addressing non-stationarity of the environment from a single agent’s perspective by utilizing a deep net critic which depends on all observations and actions. The critic input concatenates agent observations and actions in a user-specified order. However, since deep nets aren’t permutation invariant, a permuted input changes the critic output despite the environment remaining identical. To avoid this inefficiency, we propose a ‘permutation invariant critic’ (PIC), which yields identical output irrespective of the agent permutation. This consistent representation enables our model to scale to 30 times more agents and to achieve improvements of test episode reward between 15% to 50% on the challenging multi-agent particle environment (MPE).
APA
Liu, I., Yeh, R.A. & Schwing, A.G. (2020). PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning. Proceedings of the Conference on Robot Learning, in PMLR 100:590-602.