Logical Team Q-learning: An approach towards factored policies in cooperative MARL

Lucas Cassano, Ali H. Sayed
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:667-675, 2021.

Abstract

We address the challenge of learning factored policies in cooperative MARL scenarios. In particular, we consider the situation in which a team of agents collaborates to optimize a common cost. The goal is to obtain factored policies that determine the individual behavior of each agent so that the resulting joint policy is optimal. The main contribution of this work is the introduction of Logical Team Q-learning (LTQL). LTQL does not rely on assumptions about the environment and hence is generally applicable to any collaborative MARL scenario. We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work. We conclude the paper by providing experiments (both in the tabular and deep settings) that illustrate the claims.

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-cassano21a,
  title     = {Logical Team Q-learning: An approach towards factored policies in cooperative MARL},
  author    = {Cassano, Lucas and Sayed, Ali H.},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {667--675},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/cassano21a/cassano21a.pdf},
  url       = {https://proceedings.mlr.press/v130/cassano21a.html},
  abstract  = {We address the challenge of learning factored policies in cooperative MARL scenarios. In particular, we consider the situation in which a team of agents collaborates to optimize a common cost. The goal is to obtain factored policies that determine the individual behavior of each agent so that the resulting joint policy is optimal. The main contribution of this work is the introduction of Logical Team Q-learning (LTQL). LTQL does not rely on assumptions about the environment and hence is generally applicable to any collaborative MARL scenario. We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work. We conclude the paper by providing experiments (both in the tabular and deep settings) that illustrate the claims.}
}
Endnote
%0 Conference Paper
%T Logical Team Q-learning: An approach towards factored policies in cooperative MARL
%A Lucas Cassano
%A Ali H. Sayed
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-cassano21a
%I PMLR
%P 667--675
%U https://proceedings.mlr.press/v130/cassano21a.html
%V 130
%X We address the challenge of learning factored policies in cooperative MARL scenarios. In particular, we consider the situation in which a team of agents collaborates to optimize a common cost. The goal is to obtain factored policies that determine the individual behavior of each agent so that the resulting joint policy is optimal. The main contribution of this work is the introduction of Logical Team Q-learning (LTQL). LTQL does not rely on assumptions about the environment and hence is generally applicable to any collaborative MARL scenario. We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work. We conclude the paper by providing experiments (both in the tabular and deep settings) that illustrate the claims.
APA
Cassano, L. & Sayed, A. H. (2021). Logical Team Q-learning: An approach towards factored policies in cooperative MARL. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:667-675. Available from https://proceedings.mlr.press/v130/cassano21a.html.