Multi-Agent Reinforcement Learning with Multi-Step Generative Models

Orr Krupnik, Igor Mordatch, Aviv Tamar
Proceedings of the Conference on Robot Learning, PMLR 100:776-790, 2020.

Abstract

We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems – an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled to allow for optimization of each agent's behavior separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach has better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.
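
A minimal, hypothetical sketch of the idea described above (not the authors' implementation): a VAE over 2-agent trajectory segments in which each agent's segment is encoded into its own latent code, while a joint decoder reconstructs both segments together, so interactions are captured but each latent remains separately optimizable. All names, dimensions, and the Gaussian/MSE loss choices below are illustrative assumptions.

import torch
import torch.nn as nn

class TwoAgentSegmentVAE(nn.Module):
    """Illustrative disentangled VAE over two agents' trajectory segments."""
    def __init__(self, obs_dim=16, horizon=10, latent_dim=8, hidden=128):
        super().__init__()
        seg_dim = obs_dim * horizon  # flattened trajectory segment per agent
        # One encoder per agent -> separate (disentangled) latent codes
        self.enc1 = nn.Sequential(nn.Linear(seg_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * latent_dim))
        self.enc2 = nn.Sequential(nn.Linear(seg_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * latent_dim))
        # Joint decoder conditioned on both latents -> captures interaction
        self.dec = nn.Sequential(nn.Linear(2 * latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * seg_dim))

    def reparameterize(self, stats):
        mu, log_var = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return z, mu, log_var

    def forward(self, seg1, seg2):
        z1, mu1, lv1 = self.reparameterize(self.enc1(seg1))
        z2, mu2, lv2 = self.reparameterize(self.enc2(seg2))
        recon = self.dec(torch.cat([z1, z2], dim=-1))
        recon1, recon2 = recon.chunk(2, dim=-1)
        return recon1, recon2, (mu1, lv1), (mu2, lv2)

def elbo_loss(recon1, recon2, seg1, seg2, stats1, stats2):
    # Reconstruction of both agents' segments + a KL term per agent latent
    rec = ((recon1 - seg1) ** 2).sum(-1).mean() + ((recon2 - seg2) ** 2).sum(-1).mean()
    kl = 0.0
    for mu, lv in (stats1, stats2):
        kl = kl + (-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1)).mean()
    return rec + kl

With a factorization like this, one agent's latent code can in principle be held fixed while the other's is optimized against a learned reward over decoded segments, which is the kind of per-agent optimization in latent space that the disentanglement is meant to enable.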

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-krupnik20a,
  title     = {Multi-Agent Reinforcement Learning with Multi-Step Generative Models},
  author    = {Krupnik, Orr and Mordatch, Igor and Tamar, Aviv},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {776--790},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/krupnik20a/krupnik20a.pdf},
  url       = {https://proceedings.mlr.press/v100/krupnik20a.html},
  abstract  = {We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems – an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled to allow for optimization of each agent's behavior separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach has better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.}
}
Endnote
%0 Conference Paper
%T Multi-Agent Reinforcement Learning with Multi-Step Generative Models
%A Orr Krupnik
%A Igor Mordatch
%A Aviv Tamar
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-krupnik20a
%I PMLR
%P 776--790
%U https://proceedings.mlr.press/v100/krupnik20a.html
%V 100
%X We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems – an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled to allow for optimization of each agent's behavior separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach has better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.
APA
Krupnik, O., Mordatch, I. & Tamar, A. (2020). Multi-Agent Reinforcement Learning with Multi-Step Generative Models. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:776-790. Available from https://proceedings.mlr.press/v100/krupnik20a.html.
