Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations

Yanwei Wang, Nadia Figueroa, Shen Li, Ankit Shah, Julie Shah
Proceedings of The 6th Conference on Robot Learning, PMLR 205:94-105, 2023.

Abstract

Learning from demonstration (LfD) has successfully solved tasks featuring a long time horizon. However, when the problem complexity also includes human-in-the-loop perturbations, state-of-the-art approaches do not guarantee the successful reproduction of a task. In this work, we identify the roots of this challenge as the failure of a learned continuous policy to satisfy the discrete plan implicit in the demonstration. By utilizing modes (rather than subgoals) as the discrete abstraction and motion policies with both mode invariance and goal reachability properties, we prove our learned continuous policy can simulate any discrete plan specified by a linear temporal logic (LTL) formula. Consequently, an imitator is robust to both task- and motion-level perturbations and guaranteed to achieve task success.

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-wang23a,
  title     = {Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations},
  author    = {Wang, Yanwei and Figueroa, Nadia and Li, Shen and Shah, Ankit and Shah, Julie},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {94--105},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/wang23a/wang23a.pdf},
  url       = {https://proceedings.mlr.press/v205/wang23a.html},
  abstract  = {Learning from demonstration (LfD) has successfully solved tasks featuring a long time horizon. However, when the problem complexity also includes human-in-the-loop perturbations, state-of-the-art approaches do not guarantee the successful reproduction of a task. In this work, we identify the roots of this challenge as the failure of a learned continuous policy to satisfy the discrete plan implicit in the demonstration. By utilizing modes (rather than subgoals) as the discrete abstraction and motion policies with both mode invariance and goal reachability properties, we prove our learned continuous policy can simulate any discrete plan specified by a linear temporal logic (LTL) formula. Consequently, an imitator is robust to both task- and motion-level perturbations and guaranteed to achieve task success.}
}
Endnote
%0 Conference Paper
%T Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations
%A Yanwei Wang
%A Nadia Figueroa
%A Shen Li
%A Ankit Shah
%A Julie Shah
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-wang23a
%I PMLR
%P 94--105
%U https://proceedings.mlr.press/v205/wang23a.html
%V 205
%X Learning from demonstration (LfD) has successfully solved tasks featuring a long time horizon. However, when the problem complexity also includes human-in-the-loop perturbations, state-of-the-art approaches do not guarantee the successful reproduction of a task. In this work, we identify the roots of this challenge as the failure of a learned continuous policy to satisfy the discrete plan implicit in the demonstration. By utilizing modes (rather than subgoals) as the discrete abstraction and motion policies with both mode invariance and goal reachability properties, we prove our learned continuous policy can simulate any discrete plan specified by a linear temporal logic (LTL) formula. Consequently, an imitator is robust to both task- and motion-level perturbations and guaranteed to achieve task success.
APA
Wang, Y., Figueroa, N., Li, S., Shah, A. &amp; Shah, J. (2023). Temporal Logic Imitation: Learning Plan-Satisficing Motion Policies from Demonstrations. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:94-105. Available from https://proceedings.mlr.press/v205/wang23a.html.