Few-Shot Neuro-Symbolic Imitation Learning for Long-Horizon Planning and Acting

Pierrick Lorang, Hong Lu, Johannes Huemer, Patrik Zips, Matthias Scheutz
Proceedings of The 9th Conference on Robot Learning, PMLR 305:2501-2518, 2025.

Abstract

Imitation learning enables intelligent systems to acquire complex behaviors with minimal supervision. However, existing methods often focus on short-horizon skills, require large datasets, and struggle to solve long-horizon tasks or generalize across task variations and distribution shifts. We propose a novel neuro-symbolic framework that jointly learns continuous control policies and symbolic domain abstractions from a few skill demonstrations. Our method abstracts high-level task structures into a graph, discovers symbolic rules via an Answer Set Programming solver, and trains low-level controllers using diffusion policy imitation learning. A high-level oracle filters task-relevant information to focus each controller on a minimal observation and action space. Our graph-based neuro-symbolic framework enables capturing complex state transitions, including non-spatial and temporal relations, that data-driven learning or clustering techniques often fail to discover in limited demonstration datasets. We validate our approach in six domains that involve four robotic arms, Stacking, Kitchen, Assembly, and Towers of Hanoi environments, and a distinct Automated Forklift domain with two environments. The results demonstrate high data efficiency with as few as five skill demonstrations, strong zero- and few-shot generalizations, and interpretable decision making. Our code is publicly available.
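The abstract describes a three-part pipeline: demonstrations are abstracted into a symbolic transition graph, an Answer Set Programming (ASP) solver induces symbolic rules over that graph, and diffusion policies handle low-level control. As an illustration of the ASP step only, the sketch below is not the authors' implementation; the predicate names (obs/2, act/2, effect/2) and the toy stacking transition are assumptions made for this example. It encodes a demonstrated transition as clingo facts and asks the solver for an action-effect hypothesis consistent with it.

# Hedged sketch (assumed encoding, not the paper's): induce an effect
# hypothesis for a "pick" skill from one demonstrated symbolic transition.
import clingo

program = """
step(0..1).

% Toy demonstration: clear(a) and handempty hold at step 0,
% pick(a) is executed at step 0, holding(a) holds at step 1.
obs(clear(a), 0).  obs(handempty, 0).
act(pick(a), 0).
obs(holding(a), 1).

holds(F, T) :- obs(F, T).

% Derive an effect hypothesis: pick(X) yields holding(X) whenever the
% demonstration shows clear(X) before the action and holding(X) after it.
effect(pick(X), holding(X)) :-
    holds(clear(X), T), act(pick(X), T), holds(holding(X), T+1).

#show effect/2.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("answer set:", m))  # effect(pick(a),holding(a))

In spirit, a solver-based step like this can recover transition rules from very few demonstrations, since consistency is checked logically rather than estimated statistically; the paper's actual encoding, predicates, and rule-search procedure may differ substantially.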

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-lorang25a,
  title     = {Few-Shot Neuro-Symbolic Imitation Learning for Long-Horizon Planning and Acting},
  author    = {Lorang, Pierrick and Lu, Hong and Huemer, Johannes and Zips, Patrik and Scheutz, Matthias},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {2501--2518},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/lorang25a/lorang25a.pdf},
  url       = {https://proceedings.mlr.press/v305/lorang25a.html},
  abstract  = {Imitation learning enables intelligent systems to acquire complex behaviors with minimal supervision. However, existing methods often focus on short-horizon skills, require large datasets, and struggle to solve long-horizon tasks or generalize across task variations and distribution shifts. We propose a novel neuro-symbolic framework that jointly learns continuous control policies and symbolic domain abstractions from a few skill demonstrations. Our method abstracts high-level task structures into a graph, discovers symbolic rules via an Answer Set Programming solver, and trains low-level controllers using diffusion policy imitation learning. A high-level oracle filters task-relevant information to focus each controller on a minimal observation and action space. Our graph-based neuro-symbolic framework enables capturing complex state transitions, including non-spatial and temporal relations, that data-driven learning or clustering techniques often fail to discover in limited demonstration datasets. We validate our approach in six domains that involve four robotic arms, Stacking, Kitchen, Assembly, and Towers of Hanoi environments, and a distinct Automated Forklift domain with two environments. The results demonstrate high data efficiency with as few as five skill demonstrations, strong zero- and few-shot generalizations, and interpretable decision making. Our code is publicly available.}
}
Endnote
%0 Conference Paper
%T Few-Shot Neuro-Symbolic Imitation Learning for Long-Horizon Planning and Acting
%A Pierrick Lorang
%A Hong Lu
%A Johannes Huemer
%A Patrik Zips
%A Matthias Scheutz
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-lorang25a
%I PMLR
%P 2501--2518
%U https://proceedings.mlr.press/v305/lorang25a.html
%V 305
%X Imitation learning enables intelligent systems to acquire complex behaviors with minimal supervision. However, existing methods often focus on short-horizon skills, require large datasets, and struggle to solve long-horizon tasks or generalize across task variations and distribution shifts. We propose a novel neuro-symbolic framework that jointly learns continuous control policies and symbolic domain abstractions from a few skill demonstrations. Our method abstracts high-level task structures into a graph, discovers symbolic rules via an Answer Set Programming solver, and trains low-level controllers using diffusion policy imitation learning. A high-level oracle filters task-relevant information to focus each controller on a minimal observation and action space. Our graph-based neuro-symbolic framework enables capturing complex state transitions, including non-spatial and temporal relations, that data-driven learning or clustering techniques often fail to discover in limited demonstration datasets. We validate our approach in six domains that involve four robotic arms, Stacking, Kitchen, Assembly, and Towers of Hanoi environments, and a distinct Automated Forklift domain with two environments. The results demonstrate high data efficiency with as few as five skill demonstrations, strong zero- and few-shot generalizations, and interpretable decision making. Our code is publicly available.
APA
Lorang, P., Lu, H., Huemer, J., Zips, P. & Scheutz, M. (2025). Few-Shot Neuro-Symbolic Imitation Learning for Long-Horizon Planning and Acting. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:2501-2518. Available from https://proceedings.mlr.press/v305/lorang25a.html.