DART: Noise Injection for Robust Imitation Learning

Michael Laskey, Jonathan Lee, Roy Fox, Anca Dragan, Ken Goldberg
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:143-156, 2017.

Abstract

One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this “off-policy” approach is that the robot’s errors compound when drifting away from the supervisor’s demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be difficult for human supervisors, add significant computational burden, and require the robot to visit potentially dangerous states during training. We propose an off-policy approach that injects noise into the supervisor’s policy while demonstrating. This forces the supervisor and robot to explore and recover from errors without letting them compound. We propose a new algorithm, DART, that collects demonstrations with injected noise, and optimizes the noise level to approximate the error of the robot’s trained policy during data collection. We provide a theoretical analysis to illustrate that DART reduces covariate shift more than Behavior Cloning for a robot with non-zero error. We evaluate DART in two domains: in simulation with an algorithmic supervisor on the MuJoCo locomotion tasks and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For challenging tasks like Humanoid, DART can be up to 280% faster in computation time and only decreases the supervisor’s cumulative reward by 5% during training, whereas DAgger executes policies that have 80% less cumulative reward than the supervisor. On the grasping in clutter task, DART obtains on average a 62% performance increase over Behavior Cloning.
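To make the data-collection idea concrete, the following is a minimal Python sketch of DART-style demonstration collection under simplifying assumptions: a gym-style env with reset/step, a supervisor callable returning continuous actions, and isotropic Gaussian noise whose scale is set from the trained policy's empirical error on the supervisor's labels (a simplified stand-in for the paper's noise optimization). The names collect_noisy_demos and fit_isotropic_noise are illustrative, not taken from the authors' code.

import numpy as np

def collect_noisy_demos(env, supervisor, noise_cov, n_demos, horizon):
    # Roll out the supervisor while injecting Gaussian noise into the executed
    # actions. The dataset is labeled with the supervisor's intended (noise-free)
    # actions, so the learner sees recovery labels for slightly perturbed states.
    states, actions = [], []
    for _ in range(n_demos):
        s = env.reset()
        for _ in range(horizon):
            a_sup = np.asarray(supervisor(s))                 # intended action (the label)
            noise = np.random.multivariate_normal(np.zeros(a_sup.shape[0]), noise_cov)
            states.append(s)
            actions.append(a_sup)
            s, _, done, _ = env.step(a_sup + noise)           # execute the perturbed action
            if done:
                break
    return np.array(states), np.array(actions)

def fit_isotropic_noise(policy, states, actions):
    # Simplified stand-in for DART's noise optimization: set the injected noise
    # covariance to the trained policy's mean squared error per action dimension
    # on the supervisor's labels, so the noise level tracks the learner's error.
    residuals = policy.predict(states) - actions
    mse_per_dim = np.mean(np.sum(residuals ** 2, axis=1)) / actions.shape[1]
    return mse_per_dim * np.eye(actions.shape[1])

In an iterative loop one would alternate: collect demonstrations with the current noise covariance, retrain the policy on all data so far, then recompute the covariance with fit_isotropic_noise for the next round of collection.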

Cite this Paper


BibTeX
@InProceedings{pmlr-v78-laskey17a,
  title     = {DART: Noise Injection for Robust Imitation Learning},
  author    = {Laskey, Michael and Lee, Jonathan and Fox, Roy and Dragan, Anca and Goldberg, Ken},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages     = {143--156},
  year      = {2017},
  editor    = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume    = {78},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v78/laskey17a/laskey17a.pdf},
  url       = {https://proceedings.mlr.press/v78/laskey17a.html},
  abstract  = {One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this ``off-policy'' approach is that the robot's errors compound when drifting away from the supervisor's demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be difficult for human supervisors, add significant computational burden, and require the robot to visit potentially dangerous states during training. We propose an off-policy approach that \emph{injects} noise into the supervisor's policy while demonstrating. This forces the supervisor and robot to explore and recover from errors without letting them compound. We propose a new algorithm, DART, that collects demonstrations with injected noise, and optimizes the noise level to approximate the error of the robot's trained policy during data collection. We provide a theoretical analysis to illustrate that DART reduces covariate shift more than Behavior Cloning for a robot with non-zero error. We evaluate DART in two domains: in simulation with an algorithmic supervisor on the MuJoCo locomotion tasks and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For challenging tasks like Humanoid, DART can be up to 280\% faster in computation time and only decreases the supervisor's cumulative reward by 5\% during training, whereas DAgger executes policies that have 80\% less cumulative reward than the supervisor. On the grasping in clutter task, DART obtains on average a 62\% performance increase over Behavior Cloning.}
}
Endnote
%0 Conference Paper
%T DART: Noise Injection for Robust Imitation Learning
%A Michael Laskey
%A Jonathan Lee
%A Roy Fox
%A Anca Dragan
%A Ken Goldberg
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-laskey17a
%I PMLR
%P 143--156
%U https://proceedings.mlr.press/v78/laskey17a.html
%V 78
%X One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this “off-policy” approach is that the robot’s errors compound when drifting away from the supervisor’s demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be difficult for human supervisors, add significant computational burden, and require the robot to visit potentially dangerous states during training. We propose an off-policy approach that injects noise into the supervisor’s policy while demonstrating. This forces the supervisor and robot to explore and recover from errors without letting them compound. We propose a new algorithm, DART, that collects demonstrations with injected noise, and optimizes the noise level to approximate the error of the robot’s trained policy during data collection. We provide a theoretical analysis to illustrate that DART reduces covariate shift more than Behavior Cloning for a robot with non-zero error. We evaluate DART in two domains: in simulation with an algorithmic supervisor on the MuJoCo locomotion tasks and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For challenging tasks like Humanoid, DART can be up to 280% faster in computation time and only decreases the supervisor’s cumulative reward by 5% during training, whereas DAgger executes policies that have 80% less cumulative reward than the supervisor. On the grasping in clutter task, DART obtains on average a 62% performance increase over Behavior Cloning.
APA
Laskey, M., Lee, J., Fox, R., Dragan, A. &amp; Goldberg, K. (2017). DART: Noise Injection for Robust Imitation Learning. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:143-156. Available from https://proceedings.mlr.press/v78/laskey17a.html.