DART: Noise Injection for Robust Imitation Learning
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:143-156, 2017.
Abstract
One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this “off-policy” approach is that the robot’s errors compound when it drifts away from the supervisor’s demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be difficult for human supervisors, add significant computational burden, and require the robot to visit potentially dangerous states during training. We propose an off-policy approach that injects noise into the supervisor’s policy while demonstrating. This forces the supervisor and robot to explore and recover from errors before they compound. We propose a new algorithm, DART, that collects demonstrations with injected noise and optimizes the noise level to approximate the error of the robot’s trained policy during data collection. We provide a theoretical analysis illustrating that DART reduces covariate shift more than Behavior Cloning for a robot with non-zero error. We evaluate DART in two domains: in simulation with an algorithmic supervisor on MuJoCo locomotion tasks, and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For challenging tasks like Humanoid, DART can be up to 280% faster in computation time and decreases the supervisor’s cumulative reward by only 5% during training, whereas DAgger executes policies that have 80% less cumulative reward than the supervisor. On the grasping-in-clutter task, DART obtains on average a 62% performance increase over Behavior Cloning.
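The training loop the abstract describes — execute the supervisor with injected noise, label states with the supervisor’s intended actions, train via Behavior Cloning, then refit the noise level to the trained policy’s error — could be sketched as below. This is a minimal illustration under assumptions, not the authors’ implementation: env, supervisor, and fit_policy are hypothetical placeholders, a gym-style step() interface is assumed, and the noise update shown is a simple maximum-likelihood covariance estimate of the policy’s residuals on the collected states.

```python
# Hypothetical sketch of a DART-style loop; names and interfaces are assumed.
import numpy as np

def collect_noisy_demo(env, supervisor, noise_cov, horizon):
    """Roll out the supervisor, executing its action plus Gaussian noise."""
    states, actions = [], []
    s = env.reset()
    for _ in range(horizon):
        a = supervisor(s)                          # supervisor's intended action
        noisy_a = a + np.random.multivariate_normal(
            np.zeros(len(a)), noise_cov)           # injected exploration noise
        states.append(s)
        actions.append(a)                          # label with the intended action...
        s, _, done, _ = env.step(noisy_a)          # ...but execute the noisy one
        if done:
            break
    return np.array(states), np.array(actions)

def dart(env, supervisor, fit_policy, iterations, demos_per_iter,
         horizon, action_dim):
    noise_cov = 1e-3 * np.eye(action_dim)          # start with small noise
    data_s, data_a = [], []
    policy = None
    for _ in range(iterations):
        for _ in range(demos_per_iter):
            S, A = collect_noisy_demo(env, supervisor, noise_cov, horizon)
            data_s.append(S)
            data_a.append(A)
        X = np.concatenate(data_s)
        Y = np.concatenate(data_a)
        policy = fit_policy(X, Y)                  # Behavior Cloning on all data
        # Refit the noise to approximate the trained policy's error on the
        # demonstration states (maximum-likelihood covariance of residuals);
        # fit_policy is assumed to return a callable mapping states to actions.
        residuals = policy(X) - Y
        noise_cov = residuals.T @ residuals / len(X)
    return policy
```

Because the supervisor, not the learned policy, acts during data collection, the approach stays off-policy: the noise level, rather than on-policy rollouts, is what exposes the recovery behavior the robot needs to learn.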