Modeling Others using Oneself in Multi-Agent Reinforcement Learning

Roberta Raileanu, Emily Denton, Arthur Szlam, Rob Fergus
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4257-4266, 2018.

Abstract

We consider the multi-agent reinforcement learning setting with imperfect information. The reward function depends on the hidden goals of both agents, so the agents must infer the other players’ goals from their observed behavior in order to maximize their returns. We propose a new approach for learning in these domains: Self Other-Modeling (SOM), in which an agent uses its own policy to predict the other agent’s actions and update its belief of their hidden goal in an online manner. We evaluate this approach on three different tasks and show that the agents are able to learn better policies using their estimate of the other players’ goals, in both cooperative and competitive settings.
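The abstract describes the core SOM mechanism: an agent reuses its own (frozen) policy to explain the other agent's observed actions, updating a belief over the other's hidden goal online. The paper's agents use recurrent policy networks; the following is only a minimal numpy sketch of the belief-update idea under simplifying assumptions (a linear-softmax policy, a goal represented as a continuous vector `z`, and one gradient step per observed action). All names here (`policy`, `infer_goal_step`, `loglik`) are illustrative, not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

S, G, A = 4, 3, 5                      # state dim, goal dim, number of actions
W = rng.normal(size=(A, S + G))        # the agent's own (frozen) policy weights

def policy(state, goal):
    """Softmax policy pi(a | state, goal) computed with the agent's OWN weights."""
    logits = W @ np.concatenate([state, goal])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def infer_goal_step(state, action, z, lr=0.2):
    """One online SOM-style belief update: gradient ascent on the inferred
    goal z so that the agent's own policy better explains the observed action,
    i.e. raise log pi(action | state, z). Policy weights stay fixed."""
    p = policy(state, z)
    onehot = np.eye(A)[action]
    grad_z = (onehot - p) @ W[:, S:]   # d log pi(a|s,z) / d z for log-softmax
    return z + lr * grad_z

def loglik(z, data):
    """Total log-likelihood of the observed actions under belief z."""
    return sum(np.log(policy(s, z)[a]) for s, a in data)

# Simulate the other agent: it acts through the same policy, but conditioned
# on a hidden goal the observer cannot see.
true_goal = rng.normal(size=G)
data = []
for _ in range(50):
    s = rng.normal(size=S)
    a = int(np.argmax(policy(s, true_goal)))   # observed (greedy) action
    data.append((s, a))

z = np.zeros(G)                        # initial belief about the hidden goal
ll_init = loglik(z, data)
for s, a in data:                      # one online update per observed action
    z = infer_goal_step(s, a, z)
ll_final = loglik(z, data)
```

After the online updates, the inferred goal `z` explains the other agent's behavior better than the initial belief (`ll_final > ll_init`); in the paper this inferred goal is then fed back into the agent's own policy when selecting its next action.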

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-raileanu18a,
  title     = {Modeling Others using Oneself in Multi-Agent Reinforcement Learning},
  author    = {Raileanu, Roberta and Denton, Emily and Szlam, Arthur and Fergus, Rob},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4257--4266},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/raileanu18a/raileanu18a.pdf},
  url       = {https://proceedings.mlr.press/v80/raileanu18a.html},
  abstract  = {We consider the multi-agent reinforcement learning setting with imperfect information. The reward function depends on the hidden goals of both agents, so the agents must infer the other players’ goals from their observed behavior in order to maximize their returns. We propose a new approach for learning in these domains: Self Other-Modeling (SOM), in which an agent uses its own policy to predict the other agent’s actions and update its belief of their hidden goal in an online manner. We evaluate this approach on three different tasks and show that the agents are able to learn better policies using their estimate of the other players’ goals, in both cooperative and competitive settings.}
}
Endnote
%0 Conference Paper
%T Modeling Others using Oneself in Multi-Agent Reinforcement Learning
%A Roberta Raileanu
%A Emily Denton
%A Arthur Szlam
%A Rob Fergus
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-raileanu18a
%I PMLR
%P 4257--4266
%U https://proceedings.mlr.press/v80/raileanu18a.html
%V 80
%X We consider the multi-agent reinforcement learning setting with imperfect information. The reward function depends on the hidden goals of both agents, so the agents must infer the other players’ goals from their observed behavior in order to maximize their returns. We propose a new approach for learning in these domains: Self Other-Modeling (SOM), in which an agent uses its own policy to predict the other agent’s actions and update its belief of their hidden goal in an online manner. We evaluate this approach on three different tasks and show that the agents are able to learn better policies using their estimate of the other players’ goals, in both cooperative and competitive settings.
APA
Raileanu, R., Denton, E., Szlam, A. & Fergus, R. (2018). Modeling Others using Oneself in Multi-Agent Reinforcement Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4257-4266. Available from https://proceedings.mlr.press/v80/raileanu18a.html.