Learning from Demonstrations using Signal Temporal Logic

Aniruddh Puranic, Jyotirmoy Deshmukh, Stefanos Nikolaidis
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:2228-2242, 2021.

Abstract

Learning-from-demonstrations is an emerging paradigm to obtain effective robot control policies for complex tasks via reinforcement learning without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns of safety and interpretability in the learned control policies. To address these issues, we use Signal Temporal Logic to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards, and also define interesting causal dependencies between tasks such as sequential task specifications. We validate our approach through experiments on discrete-world and OpenAI Gym environments, and show that our approach outperforms the state-of-the-art Maximum Causal Entropy Inverse Reinforcement Learning.
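The core idea the abstract describes, ranking demonstrations by how well they satisfy a temporal-logic specification, can be sketched with STL's quantitative semantics: each formula yields a real-valued robustness score (positive means satisfied, with margin). The snippet below is a minimal illustration, not the paper's implementation; the specification, state layout, and all names are assumptions for the example.

```python
# Illustrative sketch (not the paper's method): rank demonstrations by the
# robustness of a simple STL formula using its quantitative semantics.

def rob_eventually(signal, predicate):
    """Robustness of F(predicate): best margin achieved at any time step."""
    return max(predicate(x) for x in signal)

def rob_always(signal, predicate):
    """Robustness of G(predicate): worst margin over all time steps."""
    return min(predicate(x) for x in signal)

# Hypothetical spec: eventually get within 0.5 of a goal position AND
# always keep height positive. Conjunction = min of robustness values.
GOAL = 10.0

def spec_robustness(traj):
    reach = rob_eventually(traj, lambda s: 0.5 - abs(s[0] - GOAL))
    safe = rob_always(traj, lambda s: s[1])
    return min(reach, safe)

# Each demonstration is a sequence of (position, height) states.
demos = {
    "good":   [(0.0, 1.0), (5.0, 1.2), (9.8, 1.1)],    # reaches goal, safe
    "short":  [(0.0, 1.0), (4.0, 0.9)],                 # never reaches goal
    "unsafe": [(0.0, 1.0), (6.0, -0.2), (10.0, 0.8)],   # dips below ground
}

ranked = sorted(demos, key=lambda k: spec_robustness(demos[k]), reverse=True)
print(ranked)  # highest-robustness demonstration first
```

A positive score means the demonstration satisfies the specification with some margin, so sorting by robustness orders demonstrations from best to worst, which is the kind of quality ranking the abstract refers to.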

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-puranic21a,
  title     = {Learning from Demonstrations using Signal Temporal Logic},
  author    = {Puranic, Aniruddh and Deshmukh, Jyotirmoy and Nikolaidis, Stefanos},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {2228--2242},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/puranic21a/puranic21a.pdf},
  url       = {https://proceedings.mlr.press/v155/puranic21a.html},
  abstract  = {Learning-from-demonstrations is an emerging paradigm to obtain effective robot control policies for complex tasks via reinforcement learning without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns of safety and interpretability in the learned control policies. To address these issues, we use Signal Temporal Logic to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards, and also define interesting causal dependencies between tasks such as sequential task specifications. We validate our approach through experiments on discrete-world and OpenAI Gym environments, and show that our approach outperforms the state-of-the-art Maximum Causal Entropy Inverse Reinforcement Learning.}
}
Endnote
%0 Conference Paper
%T Learning from Demonstrations using Signal Temporal Logic
%A Aniruddh Puranic
%A Jyotirmoy Deshmukh
%A Stefanos Nikolaidis
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-puranic21a
%I PMLR
%P 2228--2242
%U https://proceedings.mlr.press/v155/puranic21a.html
%V 155
%X Learning-from-demonstrations is an emerging paradigm to obtain effective robot control policies for complex tasks via reinforcement learning without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns of safety and interpretability in the learned control policies. To address these issues, we use Signal Temporal Logic to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards, and also define interesting causal dependencies between tasks such as sequential task specifications. We validate our approach through experiments on discrete-world and OpenAI Gym environments, and show that our approach outperforms the state-of-the-art Maximum Causal Entropy Inverse Reinforcement Learning.
APA
Puranic, A., Deshmukh, J. & Nikolaidis, S. (2021). Learning from Demonstrations using Signal Temporal Logic. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:2228-2242. Available from https://proceedings.mlr.press/v155/puranic21a.html.