On-Policy Robot Imitation Learning from a Converging Supervisor

Ashwin Balakrishna, Brijen Thananjeyan, Jonathan Lee, Felix Li, Arsh Zahed, Joseph E. Gonzalez, Ken Goldberg
Proceedings of the Conference on Robot Learning, PMLR 100:24-41, 2020.

Abstract

Existing on-policy imitation learning algorithms, such as DAgger, assume access to a fixed supervisor. However, there are many settings where the supervisor may evolve during policy learning, such as a human performing a novel task or an improving algorithmic controller. We formalize imitation learning from a “converging supervisor” and provide sublinear static and dynamic regret guarantees against the best policy in hindsight with labels from the converged supervisor, even when labels during learning are only from intermediate supervisors. We then show that this framework is closely connected to a class of reinforcement learning (RL) algorithms known as dual policy iteration (DPI), which alternate between training a reactive learner with imitation learning and a model-based supervisor with data from the learner. Experiments suggest that when this framework is applied with the state-of-the-art deep model-based RL algorithm PETS as an improving supervisor, it outperforms deep RL baselines on continuous control tasks and provides up to an 80-fold speedup in policy evaluation.
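To make the structure of the framework concrete, below is a minimal, hypothetical sketch of DAgger-style dataset aggregation with a supervisor whose labels improve over iterations. The toy 1-D task, the hand-coded converging supervisor, and all names are illustrative assumptions for exposition, not the paper's implementation (in the paper, the improving supervisor is a model-based controller such as PETS).

```python
# Illustrative sketch: on-policy imitation learning from a converging supervisor.
# DAgger-style loop where supervisor labels converge over iterations.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 1-D state; the supervisor's feedback gain converges to K_STAR (assumed).
K_STAR = -0.8
HORIZON, N_ITERS = 20, 10

def supervisor_label(states, iteration):
    """Intermediate supervisor: its gain converges to K_STAR as iterations grow.
    A hand-coded stand-in for an improving model-based controller."""
    k_t = K_STAR * (1.0 - 0.5 ** iteration)
    return k_t * states

def rollout(policy_gain, horizon):
    """Roll out the learner's current linear policy and record visited states."""
    states, state = [], rng.normal()
    for _ in range(horizon):
        states.append(state)
        state = state + policy_gain * state + 0.01 * rng.normal()  # simple dynamics
    return np.array(states)

# DAgger-style aggregation: the learner collects states on-policy, the current
# (intermediate) supervisor labels them, and the learner is refit on all data.
dataset_states, dataset_actions = [], []
learner_gain = 0.0
for t in range(N_ITERS):
    states = rollout(learner_gain, HORIZON)
    labels = supervisor_label(states, t)          # labels from intermediate supervisor
    dataset_states.append(states)
    dataset_actions.append(labels)
    X = np.concatenate(dataset_states)
    Y = np.concatenate(dataset_actions)
    learner_gain = float(X @ Y / (X @ X))         # least-squares fit of linear policy
    print(f"iter {t}: learner gain = {learner_gain:.3f}")

print(f"converged supervisor gain = {K_STAR}")
```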

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-balakrishna20a,
  title = {On-Policy Robot Imitation Learning from a Converging Supervisor},
  author = {Balakrishna, Ashwin and Thananjeyan, Brijen and Lee, Jonathan and Li, Felix and Zahed, Arsh and Gonzalez, Joseph E. and Goldberg, Ken},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages = {24--41},
  year = {2020},
  editor = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume = {100},
  series = {Proceedings of Machine Learning Research},
  month = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v100/balakrishna20a/balakrishna20a.pdf},
  url = {https://proceedings.mlr.press/v100/balakrishna20a.html},
  abstract = {Existing on-policy imitation learning algorithms, such as DAgger, assume access to a fixed supervisor. However, there are many settings where the supervisor may evolve during policy learning, such as a human performing a novel task or an improving algorithmic controller. We formalize imitation learning from a “converging supervisor” and provide sublinear static and dynamic regret guarantees against the best policy in hindsight with labels from the converged supervisor, even when labels during learning are only from intermediate supervisors. We then show that this framework is closely connected to a class of reinforcement learning (RL) algorithms known as dual policy iteration (DPI), which alternate between training a reactive learner with imitation learning and a model-based supervisor with data from the learner. Experiments suggest that when this framework is applied with the state-of-the-art deep model-based RL algorithm PETS as an improving supervisor, it outperforms deep RL baselines on continuous control tasks and provides up to an 80-fold speedup in policy evaluation.}
}
Endnote
%0 Conference Paper
%T On-Policy Robot Imitation Learning from a Converging Supervisor
%A Ashwin Balakrishna
%A Brijen Thananjeyan
%A Jonathan Lee
%A Felix Li
%A Arsh Zahed
%A Joseph E. Gonzalez
%A Ken Goldberg
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-balakrishna20a
%I PMLR
%P 24--41
%U https://proceedings.mlr.press/v100/balakrishna20a.html
%V 100
%X Existing on-policy imitation learning algorithms, such as DAgger, assume access to a fixed supervisor. However, there are many settings where the supervisor may evolve during policy learning, such as a human performing a novel task or an improving algorithmic controller. We formalize imitation learning from a “converging supervisor” and provide sublinear static and dynamic regret guarantees against the best policy in hindsight with labels from the converged supervisor, even when labels during learning are only from intermediate supervisors. We then show that this framework is closely connected to a class of reinforcement learning (RL) algorithms known as dual policy iteration (DPI), which alternate between training a reactive learner with imitation learning and a model-based supervisor with data from the learner. Experiments suggest that when this framework is applied with the state-of-the-art deep model-based RL algorithm PETS as an improving supervisor, it outperforms deep RL baselines on continuous control tasks and provides up to an 80-fold speedup in policy evaluation.
APA
Balakrishna, A., Thananjeyan, B., Lee, J., Li, F., Zahed, A., Gonzalez, J. E. & Goldberg, K. (2020). On-Policy Robot Imitation Learning from a Converging Supervisor. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:24-41. Available from https://proceedings.mlr.press/v100/balakrishna20a.html.
