Cooperative Exploration for Multi-Agent Deep Reinforcement Learning

Iou-Jen Liu, Unnat Jain, Raymond A Yeh, Alexander Schwing
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6826-6836, 2021.

Abstract

Exploration is critical for good results in deep reinforcement learning and has attracted much attention. Yet, existing multi-agent deep reinforcement learning algorithms still rely mostly on noise-based techniques. Recently, exploration methods that consider cooperation among multiple agents have been developed. However, these methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and rarely coordinate their exploration efforts toward those states. To address this shortcoming, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces by a normalized entropy-based technique, and agents are then trained to reach the goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on a variety of tasks, including a sparse-reward version of the multiple-particle environment (MPE) and the StarCraft multi-agent challenge (SMAC).
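The goal-selection step described above can be illustrated with count-based visitation statistics: compare projected (restricted) state spaces by their normalized entropy and pick a goal from the least evenly explored one. The following is a minimal sketch of that idea, not the authors' implementation; the function names, the tabular counts, and the "least-visited state as goal" heuristic are illustrative assumptions.

```python
import numpy as np

def normalized_entropy(counts):
    """Entropy of the empirical visitation distribution over one projected
    state space, divided by the maximum entropy log(K) so that spaces of
    different sizes K are comparable."""
    total = counts.sum()
    if total == 0 or len(counts) <= 1:
        return 0.0
    p = counts / total
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

def select_goal(projected_counts):
    """Pick the projected state space with the lowest normalized entropy
    (the least evenly explored one), then return a rarely visited state in
    that space as the shared exploration goal for all agents."""
    entropies = [normalized_entropy(c) for c in projected_counts]
    space = int(np.argmin(entropies))
    goal_state = int(np.argmin(projected_counts[space]))  # least-visited state
    return space, goal_state

# Two projected spaces: one visited evenly, one barely touched.
counts_a = np.array([5.0, 5.0, 5.0, 5.0])   # normalized entropy = 1.0
counts_b = np.array([18.0, 1.0, 1.0, 0.0])  # normalized entropy ~ 0.28
print(select_goal([counts_a, counts_b]))    # -> (1, 3): target state 3 of space 1
```

In this sketch, low normalized entropy flags a space whose visitation distribution is highly uneven, i.e., one with under-explored states worth targeting as shared goals.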

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-liu21j,
  title     = {Cooperative Exploration for Multi-Agent Deep Reinforcement Learning},
  author    = {Liu, Iou-Jen and Jain, Unnat and Yeh, Raymond A and Schwing, Alexander},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6826--6836},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21j/liu21j.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21j.html}
}
Endnote
%0 Conference Paper
%T Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
%A Iou-Jen Liu
%A Unnat Jain
%A Raymond A Yeh
%A Alexander Schwing
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-liu21j
%I PMLR
%P 6826--6836
%U https://proceedings.mlr.press/v139/liu21j.html
%V 139
APA
Liu, I., Jain, U., Yeh, R.A., & Schwing, A. (2021). Cooperative Exploration for Multi-Agent Deep Reinforcement Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6826-6836. Available from https://proceedings.mlr.press/v139/liu21j.html.