Interpretable Imitation Learning via Generative Adversarial STL Inference and Control

Wenliang Liu, Danyang Li, Erfan Aasi, Daniela Rus, Roberto Tron, Calin Belta
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:472-489, 2025.

Abstract

Imitation learning methods have demonstrated considerable success in teaching autonomous systems complex tasks through expert demonstrations. However, a limitation of these methods is their lack of interpretability, particularly in understanding the specific task the learning agent aims to accomplish. In this paper, we propose a novel imitation learning method that combines Signal Temporal Logic (STL) inference and control synthesis, enabling the explicit representation of the task as an STL formula. This approach not only provides a clear understanding of the task but also supports the integration of human knowledge and allows for adaptation to out-of-distribution scenarios by manually adjusting the STL formulas and fine-tuning the policy. We employ a Generative Adversarial Network (GAN)-inspired approach to train both the inference and policy networks, effectively narrowing the gap between expert and learned policies. The efficiency of our algorithm is demonstrated through simulations, showcasing its practical applicability and adaptability.
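The adversarial training described in the abstract can be pictured as a two-player game: an inference module fits a parametric STL formula whose robustness separates expert trajectories from policy rollouts, while the policy is updated to maximize robustness under the currently inferred formula. The sketch below is purely illustrative and not taken from the paper: it assumes a fixed one-parameter STL template G_[0,T](x > a) with a learnable threshold a, a soft-min smoothing of the robustness, 1-D single-integrator dynamics, a logistic adversarial loss, and synthetic stand-in "expert" data; all class and variable names (STLDiscriminator, Policy, soft_min) are hypothetical.

```python
# Illustrative sketch only; assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_min(x, beta=10.0):
    # Smooth, differentiable approximation of min over the time axis,
    # used so STL robustness admits gradients.
    return -torch.logsumexp(-beta * x, dim=-1) / beta

class STLDiscriminator(nn.Module):
    """Parametric STL template G_[0,T](x > a); robustness = min_t (x_t - a)."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(1))  # learnable threshold

    def robustness(self, traj):            # traj: (batch, T)
        return soft_min(traj - self.a)     # (batch,)

class Policy(nn.Module):
    """State-feedback controller u = pi(x) for 1-D single-integrator dynamics."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

    def rollout(self, x0, T=20, dt=0.1):
        xs, x = [], x0
        for _ in range(T):
            x = x + dt * self.net(x)       # differentiable simulation step
            xs.append(x)
        return torch.cat(xs, dim=-1)       # (batch, T)

disc, pi = STLDiscriminator(), Policy()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-2)
opt_pi = torch.optim.Adam(pi.parameters(), lr=1e-3)

expert = 1.0 + 0.1 * torch.randn(64, 20)   # synthetic stand-in expert trajectories
for _ in range(200):
    agent = pi.rollout(torch.randn(64, 1))
    # Inference step: push expert robustness up (formula satisfied) and
    # agent robustness down, so the formula discriminates the two.
    loss_d = -(F.logsigmoid(disc.robustness(expert)).mean()
               + F.logsigmoid(-disc.robustness(agent.detach())).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Control step: maximize robustness of the currently inferred formula.
    loss_pi = -disc.robustness(pi.rollout(torch.randn(64, 1))).mean()
    opt_pi.zero_grad(); loss_pi.backward(); opt_pi.step()
```

In the paper, the inference network presumably searches over richer formula structures and the policy is obtained by STL-based control synthesis; the sketch only illustrates the alternating robustness-based objectives that narrow the gap between expert and learned behavior.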

Cite this Paper


BibTeX
@InProceedings{pmlr-v288-liu25a,
  title     = {Interpretable Imitation Learning via Generative Adversarial STL Inference and Control},
  author    = {Liu, Wenliang and Li, Danyang and Aasi, Erfan and Rus, Daniela and Tron, Roberto and Belta, Calin},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {472--489},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/liu25a/liu25a.pdf},
  url       = {https://proceedings.mlr.press/v288/liu25a.html},
  abstract  = {Imitation learning methods have demonstrated considerable success in teaching autonomous systems complex tasks through expert demonstrations. However, a limitation of these methods is their lack of interpretability, particularly in understanding the specific task the learning agent aims to accomplish. In this paper, we propose a novel imitation learning method that combines Signal Temporal Logic (STL) inference and control synthesis, enabling the explicit representation of the task as an STL formula. This approach not only provides a clear understanding of the task but also supports the integration of human knowledge and allows for adaptation to out-of-distribution scenarios by manually adjusting the STL formulas and fine-tuning the policy. We employ a Generative Adversarial Network (GAN)-inspired approach to train both the inference and policy networks, effectively narrowing the gap between expert and learned policies. The efficiency of our algorithm is demonstrated through simulations, showcasing its practical applicability and adaptability.}
}
Endnote
%0 Conference Paper
%T Interpretable Imitation Learning via Generative Adversarial STL Inference and Control
%A Wenliang Liu
%A Danyang Li
%A Erfan Aasi
%A Daniela Rus
%A Roberto Tron
%A Calin Belta
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-liu25a
%I PMLR
%P 472--489
%U https://proceedings.mlr.press/v288/liu25a.html
%V 288
%X Imitation learning methods have demonstrated considerable success in teaching autonomous systems complex tasks through expert demonstrations. However, a limitation of these methods is their lack of interpretability, particularly in understanding the specific task the learning agent aims to accomplish. In this paper, we propose a novel imitation learning method that combines Signal Temporal Logic (STL) inference and control synthesis, enabling the explicit representation of the task as an STL formula. This approach not only provides a clear understanding of the task but also supports the integration of human knowledge and allows for adaptation to out-of-distribution scenarios by manually adjusting the STL formulas and fine-tuning the policy. We employ a Generative Adversarial Network (GAN)-inspired approach to train both the inference and policy networks, effectively narrowing the gap between expert and learned policies. The efficiency of our algorithm is demonstrated through simulations, showcasing its practical applicability and adaptability.
APA
Liu, W., Li, D., Aasi, E., Rus, D., Tron, R., & Belta, C. (2025). Interpretable Imitation Learning via Generative Adversarial STL Inference and Control. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:472-489. Available from https://proceedings.mlr.press/v288/liu25a.html.