Generalization Guarantees for Imitation Learning

Allen Ren, Sushant Veer, Anirudha Majumdar
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1426-1442, 2021.

Abstract

Control policies from imitation learning can often fail to generalize to novel environments due to imperfect demonstrations or the inability of imitation learning algorithms to accurately infer the expert’s policies. In this paper, we present rigorous generalization guarantees for imitation learning by leveraging the Probably Approximately Correct (PAC)-Bayes framework to provide upper bounds on the expected cost of policies in novel environments. We propose a two-stage training method where a latent policy distribution is first embedded with multi-modal expert behavior using a conditional variational autoencoder, and then “fine-tuned” in new training environments to explicitly optimize the generalization bound. We demonstrate strong generalization bounds and their tightness relative to empirical performance in simulation for (i) grasping diverse mugs, (ii) planar pushing with visual feedback, and (iii) vision-based indoor navigation, as well as through hardware experiments for the two manipulation tasks.
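The generalization guarantee referenced in the abstract follows the PAC-Bayes template: with a prior distribution $P_0$ over policies fixed before training and a posterior $P$ obtained after fine-tuning on $N$ training environments, the expected cost in novel environments is upper-bounded with high probability. As an illustrative sketch only (the paper may use a tighter KL-inverse variant; the symbols below are generic, not taken from the paper), the classic McAllester/Maurer form reads:

```latex
% Standard PAC-Bayes bound (illustrative form, not necessarily the exact
% bound optimized in the paper). With probability at least 1 - \delta over
% the draw of N training environments E_1, ..., E_N:
\[
\underbrace{\mathbb{E}_{E \sim \mathcal{D}}\,\mathbb{E}_{\pi \sim P}\,
  C(\pi; E)}_{\text{expected cost in novel environments}}
\;\le\;
\underbrace{\frac{1}{N}\sum_{i=1}^{N} \mathbb{E}_{\pi \sim P}\,
  C(\pi; E_i)}_{\text{empirical training cost}}
\;+\;
\sqrt{\frac{\mathrm{KL}(P \,\|\, P_0) + \log\frac{2\sqrt{N}}{\delta}}{2N}}
\]
```

The right-hand side is computable from training data alone, which is what lets the two-stage method "fine-tune" the posterior $P$ to explicitly minimize the bound rather than just the empirical cost.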

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-ren21a,
  title     = {Generalization Guarantees for Imitation Learning},
  author    = {Ren, Allen and Veer, Sushant and Majumdar, Anirudha},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1426--1442},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/ren21a/ren21a.pdf},
  url       = {https://proceedings.mlr.press/v155/ren21a.html},
  abstract  = {Control policies from imitation learning can often fail to generalize to novel environments due to imperfect demonstrations or the inability of imitation learning algorithms to accurately infer the expert’s policies. In this paper, we present rigorous generalization guarantees for imitation learning by leveraging the Probably Approximately Correct (PAC)-Bayes framework to provide upper bounds on the expected cost of policies in novel environments. We propose a two-stage training method where a latent policy distribution is first embedded with multi-modal expert behavior using a conditional variational autoencoder, and then “fine-tuned” in new training environments to explicitly optimize the generalization bound. We demonstrate strong generalization bounds and their tightness relative to empirical performance in simulation for (i) grasping diverse mugs, (ii) planar pushing with visual feedback, and (iii) vision-based indoor navigation, as well as through hardware experiments for the two manipulation tasks.}
}
Endnote
%0 Conference Paper
%T Generalization Guarantees for Imitation Learning
%A Allen Ren
%A Sushant Veer
%A Anirudha Majumdar
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-ren21a
%I PMLR
%P 1426--1442
%U https://proceedings.mlr.press/v155/ren21a.html
%V 155
%X Control policies from imitation learning can often fail to generalize to novel environments due to imperfect demonstrations or the inability of imitation learning algorithms to accurately infer the expert’s policies. In this paper, we present rigorous generalization guarantees for imitation learning by leveraging the Probably Approximately Correct (PAC)-Bayes framework to provide upper bounds on the expected cost of policies in novel environments. We propose a two-stage training method where a latent policy distribution is first embedded with multi-modal expert behavior using a conditional variational autoencoder, and then “fine-tuned” in new training environments to explicitly optimize the generalization bound. We demonstrate strong generalization bounds and their tightness relative to empirical performance in simulation for (i) grasping diverse mugs, (ii) planar pushing with visual feedback, and (iii) vision-based indoor navigation, as well as through hardware experiments for the two manipulation tasks.
APA
Ren, A., Veer, S. & Majumdar, A. (2021). Generalization Guarantees for Imitation Learning. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1426-1442. Available from https://proceedings.mlr.press/v155/ren21a.html.