L*LM: Learning Automata from Demonstrations, Examples, and Natural Language

Marcell Vazquez-Chanlatte, Karim Elmaaroufi, Stefan Witwicki, Matei Zaharia, Sanjit A. Seshia
Proceedings of the International Conference on Neuro-symbolic Systems, PMLR 288:543-569, 2025.

Abstract

Expert demonstrations have proven to be an easy way to indirectly specify complex tasks. Recent algorithms even support extracting unambiguous formal specifications, e.g. deterministic finite automata (DFA), from demonstrations. Unfortunately, these techniques are typically not sample-efficient. In this work, we introduce L*LM, an algorithm for learning DFAs from both demonstrations and natural language. Due to the expressivity of natural language, we observe a significant improvement in the data efficiency of learning DFAs from expert demonstrations. Technically, L*LM leverages large language models to answer membership queries about the underlying task. This is then combined with recent techniques for transforming learning from demonstrations into a sequence of labeled example learning problems. In our experiments, we observe the two modalities complement each other, yielding a powerful few-shot learner.
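The core idea described in the abstract, posing membership queries to an oracle (in L*LM, a large language model) to obtain labeled examples for DFA learning, can be sketched with a toy stand-in. This is an illustrative sketch only: the DFA, the task, and all names here are hypothetical, and a ground-truth automaton plays the role of the LLM oracle.

```python
# Toy sketch: a membership oracle (stand-in for an LLM) labels strings,
# producing the labeled examples a passive DFA learner could consume.

def make_dfa(transitions, start, accepting):
    """Return a membership function for a DFA given as a transition dict."""
    def accepts(word):
        state = start
        for symbol in word:
            state = transitions[(state, symbol)]
        return state in accepting
    return accepts

# Hypothetical ground-truth task: "the word contains at least one 'b'".
transitions = {
    ("q0", "a"): "q0", ("q0", "b"): "q1",
    ("q1", "a"): "q1", ("q1", "b"): "q1",
}
oracle = make_dfa(transitions, start="q0", accepting={"q1"})

# A learner would pose membership queries and collect the answers
# as labeled examples for automaton identification.
queries = ["", "a", "b", "ab", "aa", "ba"]
labeled = {w: oracle(w) for w in queries}
print(labeled)
```

In the paper's setting, the oracle call would instead prompt an LLM with a natural-language task description and ask whether a given trace satisfies the task; the sketch above only shows the query-and-label loop around it.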

Cite this Paper


BibTeX
@InProceedings{pmlr-v288-vazquez-chanlatte25a,
  title     = {L*LM: Learning Automata from Demonstrations, Examples, and Natural Language},
  author    = {Vazquez-Chanlatte, Marcell and Elmaaroufi, Karim and Witwicki, Stefan and Zaharia, Matei and Seshia, Sanjit A.},
  booktitle = {Proceedings of the International Conference on Neuro-symbolic Systems},
  pages     = {543--569},
  year      = {2025},
  editor    = {Pappas, George and Ravikumar, Pradeep and Seshia, Sanjit A.},
  volume    = {288},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v288/main/assets/vazquez-chanlatte25a/vazquez-chanlatte25a.pdf},
  url       = {https://proceedings.mlr.press/v288/vazquez-chanlatte25a.html},
  abstract  = {Expert demonstrations have proven to be an easy way to indirectly specify complex tasks. Recent algorithms even support extracting unambiguous formal specifications, e.g. deterministic finite automata (DFA), from demonstrations. Unfortunately, these techniques are typically not sample-efficient. In this work, we introduce L*LM, an algorithm for learning DFAs from both demonstrations and natural language. Due to the expressivity of natural language, we observe a significant improvement in the data efficiency of learning DFAs from expert demonstrations. Technically, L*LM leverages large language models to answer membership queries about the underlying task. This is then combined with recent techniques for transforming learning from demonstrations into a sequence of labeled example learning problems. In our experiments, we observe the two modalities complement each other, yielding a powerful few-shot learner.}
}
Endnote
%0 Conference Paper
%T L*LM: Learning Automata from Demonstrations, Examples, and Natural Language
%A Marcell Vazquez-Chanlatte
%A Karim Elmaaroufi
%A Stefan Witwicki
%A Matei Zaharia
%A Sanjit A. Seshia
%B Proceedings of the International Conference on Neuro-symbolic Systems
%C Proceedings of Machine Learning Research
%D 2025
%E George Pappas
%E Pradeep Ravikumar
%E Sanjit A. Seshia
%F pmlr-v288-vazquez-chanlatte25a
%I PMLR
%P 543--569
%U https://proceedings.mlr.press/v288/vazquez-chanlatte25a.html
%V 288
%X Expert demonstrations have proven to be an easy way to indirectly specify complex tasks. Recent algorithms even support extracting unambiguous formal specifications, e.g. deterministic finite automata (DFA), from demonstrations. Unfortunately, these techniques are typically not sample-efficient. In this work, we introduce L*LM, an algorithm for learning DFAs from both demonstrations and natural language. Due to the expressivity of natural language, we observe a significant improvement in the data efficiency of learning DFAs from expert demonstrations. Technically, L*LM leverages large language models to answer membership queries about the underlying task. This is then combined with recent techniques for transforming learning from demonstrations into a sequence of labeled example learning problems. In our experiments, we observe the two modalities complement each other, yielding a powerful few-shot learner.
APA
Vazquez-Chanlatte, M., Elmaaroufi, K., Witwicki, S., Zaharia, M. & Seshia, S.A. (2025). L*LM: Learning Automata from Demonstrations, Examples, and Natural Language. Proceedings of the International Conference on Neuro-symbolic Systems, in Proceedings of Machine Learning Research 288:543-569. Available from https://proceedings.mlr.press/v288/vazquez-chanlatte25a.html.