UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning

Tarun Gupta, Anuj Mahajan, Bei Peng, Wendelin Boehmer, Shimon Whiteson
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3930-3941, 2021.

Abstract

VDN and QMIX are two popular value-based algorithms for cooperative MARL that learn a centralized action value function as a monotonic mixing of per-agent utilities. While this enables easy decentralization of the learned policy, the restricted joint action value function can prevent them from solving tasks that require significant coordination between agents at a given timestep. We show that this problem can be overcome by improving the joint exploration of all agents during training. Specifically, we propose a novel MARL approach called Universal Value Exploration (UneVEn) that learns a set of related tasks simultaneously with a linear decomposition of universal successor features. With the policies of already solved related tasks, the joint exploration process of all agents can be improved to help them achieve better coordination. Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.
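
The abstract's key technical device is the linear decomposition of universal successor features. The sketch below illustrates, under the usual successor-feature assumptions, how one set of learned successor features can score actions for a whole family of related tasks; all names, shapes, and the random stand-in features are illustrative assumptions, not the paper's implementation.

import numpy as np

# Hedged sketch of the linear successor-feature (SF) decomposition named in
# the abstract. Under standard SF assumptions, rewards are linear in a
# feature map phi, r(s, a; w) = phi(s, a) . w, so action values inherit the
# same linear form, Q(s, a; w) = psi(s, a) . w, where psi accumulates
# discounted future phi's. One learned psi therefore yields value estimates,
# and greedy policies, for every task vector w in a related family.

rng = np.random.default_rng(0)
n_actions, d = 4, 8                    # illustrative sizes, not the paper's
psi = rng.normal(size=(n_actions, d))  # random stand-in for learned SFs at one state

def q_values(psi, w):
    # Linear SF decomposition: one matrix-vector product per task vector w.
    return psi @ w                     # shape: (n_actions,)

def greedy_action(psi, w):
    # Greedy action under task w; the same psi is reused for any related w.
    return int(np.argmax(q_values(psi, w)))

# Scoring both the target task and a perturbed "related" task with the same
# SFs is what lets already-solved related tasks guide joint exploration.
w_target = rng.normal(size=d)
w_related = w_target + 0.1 * rng.normal(size=d)
print(greedy_action(psi, w_target), greedy_action(psi, w_related))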

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-gupta21a,
  title     = {UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning},
  author    = {Gupta, Tarun and Mahajan, Anuj and Peng, Bei and Boehmer, Wendelin and Whiteson, Shimon},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3930--3941},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/gupta21a/gupta21a.pdf},
  url       = {https://proceedings.mlr.press/v139/gupta21a.html}
}
Endnote
%0 Conference Paper
%T UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning
%A Tarun Gupta
%A Anuj Mahajan
%A Bei Peng
%A Wendelin Boehmer
%A Shimon Whiteson
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-gupta21a
%I PMLR
%P 3930--3941
%U https://proceedings.mlr.press/v139/gupta21a.html
%V 139
APA
Gupta, T., Mahajan, A., Peng, B., Boehmer, W. & Whiteson, S. (2021). UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3930-3941. Available from https://proceedings.mlr.press/v139/gupta21a.html.