Distilling On-device Language Models for Robot Planning with Minimal Human Intervention

Zachary Ravichandran, Ignacio Hounie, Fernando Cladera, Alejandro Ribeiro, George J. Pappas, Vijay Kumar
Proceedings of The 9th Conference on Robot Learning, PMLR 305:4859-4884, 2025.

Abstract

Large language models (LLMs) provide robots with powerful contextual reasoning abilities and a natural human interface. Yet, current LLM-enabled robots typically depend on cloud-hosted models, limiting their usability in environments with unreliable communication infrastructure, such as outdoor or industrial settings. We present PRISM, a framework for distilling small language model (SLM)-enabled robot planners that run on-device with minimal human supervision. Starting from an existing LLM-enabled planner, PRISM automatically synthesizes diverse tasks and environments, elicits plans from the LLM, and uses this synthetic dataset to distill a compact SLM as a drop-in replacement for the source model. We apply PRISM to three LLM-enabled planners for mapping and exploration, manipulation, and household assistance, and we demonstrate that PRISM improves the performance of Llama-3.2-3B from 10-20% of GPT-4o's performance to over 93%, using only synthetic data. We further demonstrate that the distilled planners generalize across heterogeneous robotic platforms (ground and aerial) and diverse environments (indoor and outdoor). We release all software, trained models, and datasets to promote reproducibility and follow-up work.

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-ravichandran25a,
  title     = {Distilling On-device Language Models for Robot Planning with Minimal Human Intervention},
  author    = {Ravichandran, Zachary and Hounie, Ignacio and Cladera, Fernando and Ribeiro, Alejandro and Pappas, George J. and Kumar, Vijay},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {4859--4884},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/ravichandran25a/ravichandran25a.pdf},
  url       = {https://proceedings.mlr.press/v305/ravichandran25a.html},
  abstract  = {Large language models (LLMs) provide robots with powerful contextual reasoning abilities and a natural human interface. Yet, current LLM-enabled robots typically depend on cloud-hosted models, limiting their usability in environments with unreliable communication infrastructure, such as outdoor or industrial settings. We present PRISM, a framework for distilling small language model (SLM)-enabled robot planners that run on-device with minimal human supervision. Starting from an existing LLM-enabled planner, PRISM automatically synthesizes diverse tasks and environments, elicits plans from the LLM, and uses this synthetic dataset to distill a compact SLM as a drop-in replacement of the source model. We apply PRISM to three LLM-enabled planners for mapping and exploration, manipulation, and household assistance, and we demonstrate that PRISM improves the performance of Llama-3.2-3B from 10-20% of GPT-4o's performance to over 93% - using only synthetic data. We further demonstrate that the distilled planners generalize across heterogeneous robotic platforms (ground and aerial) and diverse environments (indoor and outdoor). We release all software, trained models, and datasets to promote reproducibility and follow-up work.}
}
Endnote
%0 Conference Paper
%T Distilling On-device Language Models for Robot Planning with Minimal Human Intervention
%A Zachary Ravichandran
%A Ignacio Hounie
%A Fernando Cladera
%A Alejandro Ribeiro
%A George J. Pappas
%A Vijay Kumar
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-ravichandran25a
%I PMLR
%P 4859--4884
%U https://proceedings.mlr.press/v305/ravichandran25a.html
%V 305
%X Large language models (LLMs) provide robots with powerful contextual reasoning abilities and a natural human interface. Yet, current LLM-enabled robots typically depend on cloud-hosted models, limiting their usability in environments with unreliable communication infrastructure, such as outdoor or industrial settings. We present PRISM, a framework for distilling small language model (SLM)-enabled robot planners that run on-device with minimal human supervision. Starting from an existing LLM-enabled planner, PRISM automatically synthesizes diverse tasks and environments, elicits plans from the LLM, and uses this synthetic dataset to distill a compact SLM as a drop-in replacement of the source model. We apply PRISM to three LLM-enabled planners for mapping and exploration, manipulation, and household assistance, and we demonstrate that PRISM improves the performance of Llama-3.2-3B from 10-20% of GPT-4o's performance to over 93% - using only synthetic data. We further demonstrate that the distilled planners generalize across heterogeneous robotic platforms (ground and aerial) and diverse environments (indoor and outdoor). We release all software, trained models, and datasets to promote reproducibility and follow-up work.
APA
Ravichandran, Z., Hounie, I., Cladera, F., Ribeiro, A., Pappas, G.J. & Kumar, V. (2025). Distilling On-device Language Models for Robot Planning with Minimal Human Intervention. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:4859-4884. Available from https://proceedings.mlr.press/v305/ravichandran25a.html.