Multi-Agent Reinforcement Learning with Multi-Step Generative Models
Proceedings of the Conference on Robot Learning, PMLR 100:776-790, 2020.
Abstract
We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems, an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled so that each agent's behavior can be optimized separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach has better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.
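To make the idea of a disentangled trajectory-segment model concrete, the following is a minimal sketch (not the authors' code) of a 2-agent variational auto-encoder in PyTorch. All names, dimensions, and architectural choices (TwoAgentSegmentVAE, a separate encoder per agent, a joint decoder over future states) are illustrative assumptions: each agent's action segment is encoded into its own latent, while the decoder conditions on both latents, so interactions are captured but each latent can be optimized on its own.

```python
# Hypothetical sketch of a disentangled 2-agent segment VAE (assumed names/shapes).
import torch
import torch.nn as nn

class TwoAgentSegmentVAE(nn.Module):
    def __init__(self, state_dim, action_dim, horizon, latent_dim=8, hidden=128):
        super().__init__()
        seg = action_dim * horizon
        # One encoder per agent: q(z_i | s_0, a_i)
        self.enc = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim + seg, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * latent_dim))
            for _ in range(2)
        ])
        # Joint decoder p(s_{1:H} | s_0, z_1, z_2) couples the two agents.
        self.dec = nn.Sequential(
            nn.Linear(state_dim + 2 * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim * horizon))

    def encode(self, s0, actions):
        # actions: list of two tensors, each [batch, horizon * action_dim]
        stats = [e(torch.cat([s0, a], dim=-1)) for e, a in zip(self.enc, actions)]
        mus, logvars = zip(*[torch.chunk(x, 2, dim=-1) for x in stats])
        return mus, logvars

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, s0, z1, z2):
        # Predicted future state trajectory, flattened to [batch, horizon * state_dim].
        return self.dec(torch.cat([s0, z1, z2], dim=-1))

    def forward(self, s0, actions, future_states):
        (mu1, mu2), (lv1, lv2) = self.encode(s0, actions)
        z1 = self.reparameterize(mu1, lv1)
        z2 = self.reparameterize(mu2, lv2)
        recon = self.decode(s0, z1, z2)
        recon_loss = ((recon - future_states) ** 2).mean()
        kl = sum(-0.5 * torch.mean(1 + lv - mu ** 2 - lv.exp())
                 for mu, lv in [(mu1, lv1), (mu2, lv2)])
        return recon_loss + kl
```

Under this kind of factorization, planning can search over one agent's latent while holding the other's fixed (or optimizing it adversarially), which is one plausible way the same learned model could produce both cooperative and adversarial behavior.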