Truly Batch Model-Free Inverse Reinforcement Learning about Multiple Intentions
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2359-2369, 2020.
Abstract
We consider Inverse Reinforcement Learning (IRL) about multiple intentions, i.e., the problem of estimating the unknown reward functions optimized by a group of experts that demonstrate optimal behaviors. Most existing algorithms either require access to a model of the environment or need to repeatedly compute the optimal policies for the hypothesized rewards. However, these requirements are rarely met in real-world applications, in which interacting with the environment can be expensive or even dangerous. In this paper, we address IRL about multiple intentions in a fully model-free and batch setting. We first cast the single-intention IRL problem as a constrained likelihood maximization, and then we use this formulation to cluster agents based on the likelihood of the assignment. In this way, we can efficiently solve, without interactions with the environment, both the IRL and the clustering problem. Finally, we evaluate the proposed methodology on simulated domains and on a real-world social-network application.
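The abstract describes clustering experts by the likelihood their demonstrations assign to each hypothesized reward. Below is a minimal, illustrative sketch of that idea as an EM-style loop: every name here (e.g. demo_loglik, cluster_agents) is an assumption for exposition, and the placeholder likelihood is not the constrained likelihood maximization developed in the paper.

import numpy as np

def demo_loglik(omega, demos):
    # Hypothetical log-likelihood of one agent's demonstrations (a (T, dim)
    # array of features) under reward parameters `omega`. A real
    # implementation would evaluate the paper's constrained likelihood;
    # here we use a simple linear score as a stand-in.
    return float(np.dot(omega, demos.mean(axis=0)))

def cluster_agents(all_demos, n_clusters, n_iters=50, seed=0):
    # Soft-cluster agents by the likelihood of assigning their
    # demonstrations to each cluster's reward parameters.
    rng = np.random.default_rng(seed)
    n_agents, dim = len(all_demos), all_demos[0].shape[1]
    omegas = rng.normal(size=(n_clusters, dim))  # one reward vector per cluster
    for _ in range(n_iters):
        # E-step: responsibilities proportional to exp(log-likelihood)
        ll = np.array([[demo_loglik(w, d) for w in omegas] for d in all_demos])
        ll -= ll.max(axis=1, keepdims=True)
        resp = np.exp(ll)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate each cluster's reward from responsibility-weighted
        # demonstration features (placeholder update, not the paper's estimator)
        feats = np.array([d.mean(axis=0) for d in all_demos])
        for k in range(n_clusters):
            omegas[k] = (resp[:, k][:, None] * feats).sum(axis=0)
            omegas[k] /= np.linalg.norm(omegas[k]) + 1e-8  # keep parameters bounded
    return omegas, resp

Note that no environment interaction occurs inside the loop: everything is computed from the fixed batch of demonstrations, which is the point of the fully batch, model-free setting.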