Risk-Sensitive Generative Adversarial Imitation Learning
Proceedings of Machine Learning Research, PMLR 89:2154-2163, 2019.
Abstract
We study risk-sensitive imitation learning, where the agent's goal is to perform at least as well as the expert in terms of a risk profile. We first formulate our risk-sensitive imitation learning setting. We consider the generative adversarial approach to imitation learning (GAIL) and derive an optimization problem for our formulation, which we call risk-sensitive GAIL (RS-GAIL). We then derive two versions of the RS-GAIL optimization problem that aim at matching the risk profiles of the agent and the expert w.r.t. Jensen-Shannon (JS) divergence and Wasserstein distance, respectively, and develop risk-sensitive generative adversarial imitation learning algorithms based on these optimization problems. We evaluate the performance of our algorithms and compare them with GAIL and the risk-averse imitation learning (RAIL) algorithms in two MuJoCo and two OpenAI classical control tasks.
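The two distances named in the abstract can be made concrete with a minimal sketch. The snippet below (an illustration of the standard definitions, not the paper's implementation) computes the Jensen-Shannon divergence between two discrete distributions and the 1-D Wasserstein-1 distance between two equal-size empirical samples, as one might apply to histograms or samples of episode returns; the function names and the histogram-based setup are our own assumptions.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Bounded in [0, log 2]; symmetric, unlike KL. `eps` guards log(0).
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)  # mixture midpoint
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def wasserstein_1d(x, y):
    """Wasserstein-1 distance between two equal-size 1-D samples.

    In 1-D the optimal transport plan matches sorted samples, so the
    distance is the mean absolute difference of the order statistics.
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    return np.mean(np.abs(x - y))

# Example: identical distributions give zero JS divergence;
# shifting a sample by c shifts the Wasserstein distance by c.
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # ~0.0
print(wasserstein_1d([0.0, 1.0], [1.0, 2.0])) # 1.0
```

A practical note behind the paper's two variants: JS divergence saturates at log 2 when the supports of the two distributions barely overlap, while the Wasserstein distance still reflects how far apart they are, which is the usual motivation for considering both.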