From Real World to Logic and Back: Learning Generalizable Relational Concepts For Long Horizon Robot Planning

Naman Shah, Jayesh Nagpal, Siddharth Srivastava
Proceedings of The 9th Conference on Robot Learning, PMLR 305:5362-5434, 2025.

Abstract

Humans efficiently generalize from limited demonstrations, but robots still struggle to transfer learned knowledge to complex, unseen tasks with longer horizons and increased complexity. We propose the first known method enabling robots to autonomously invent relational concepts directly from small sets of unannotated, unsegmented demonstrations. The learned symbolic concepts are grounded into logic-based world models, facilitating efficient zero-shot generalization to significantly more complex tasks. Empirical results demonstrate that our approach achieves performance comparable to hand-crafted models, successfully scaling execution horizons and handling up to 18 times more objects than seen in training, providing the first autonomous framework for learning transferable symbolic abstractions from raw robot trajectories.

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-shah25a,
  title = {From Real World to Logic and Back: Learning Generalizable Relational Concepts For Long Horizon Robot Planning},
  author = {Shah, Naman and Nagpal, Jayesh and Srivastava, Siddharth},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages = {5362--5434},
  year = {2025},
  editor = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume = {305},
  series = {Proceedings of Machine Learning Research},
  month = {27--30 Sep},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/shah25a/shah25a.pdf},
  url = {https://proceedings.mlr.press/v305/shah25a.html},
  abstract = {Humans efficiently generalize from limited demonstrations, but robots still struggle to transfer learned knowledge to complex, unseen tasks with longer horizons and increased complexity. We propose the first known method enabling robots to autonomously invent relational concepts directly from small sets of unannotated, unsegmented demonstrations. The learned symbolic concepts are grounded into logic-based world models, facilitating efficient zero-shot generalization to significantly more complex tasks. Empirical results demonstrate that our approach achieves performance comparable to hand-crafted models, successfully scaling execution horizons and handling up to 18 times more objects than seen in training, providing the first autonomous framework for learning transferable symbolic abstractions from raw robot trajectories.}
}
Endnote
%0 Conference Paper
%T From Real World to Logic and Back: Learning Generalizable Relational Concepts For Long Horizon Robot Planning
%A Naman Shah
%A Jayesh Nagpal
%A Siddharth Srivastava
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-shah25a
%I PMLR
%P 5362--5434
%U https://proceedings.mlr.press/v305/shah25a.html
%V 305
%X Humans efficiently generalize from limited demonstrations, but robots still struggle to transfer learned knowledge to complex, unseen tasks with longer horizons and increased complexity. We propose the first known method enabling robots to autonomously invent relational concepts directly from small sets of unannotated, unsegmented demonstrations. The learned symbolic concepts are grounded into logic-based world models, facilitating efficient zero-shot generalization to significantly more complex tasks. Empirical results demonstrate that our approach achieves performance comparable to hand-crafted models, successfully scaling execution horizons and handling up to 18 times more objects than seen in training, providing the first autonomous framework for learning transferable symbolic abstractions from raw robot trajectories.
APA
Shah, N., Nagpal, J. & Srivastava, S. (2025). From Real World to Logic and Back: Learning Generalizable Relational Concepts For Long Horizon Robot Planning. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:5362-5434. Available from https://proceedings.mlr.press/v305/shah25a.html.