Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild

Kyle Stachowicz, Lydia Ignatova, Sergey Levine
Proceedings of The 8th Conference on Robot Learning, PMLR 270:1035-1047, 2025.

Abstract

Recent works have proposed a number of general-purpose robotic foundation models that can control a variety of robotic platforms to perform a range of different tasks, including in the domains of navigation and manipulation. However, such models are typically trained via imitation learning, which precludes the ability to improve autonomously through experience that the robot gathers on the job. In this work, our aim is to train general-purpose robotic foundation models in the domain of robotic navigation specifically with the aim of enabling autonomous self-improvement. We show that a combination of pretraining with offline reinforcement learning and a complete system for continual autonomous operation leads to a robotic learning framework that not only starts off with broad and diverse capabilities, but can further improve and adapt those capabilities in the course of carrying out navigational tasks in a given deployment location. To our knowledge, our model LiReN is the first navigation robot foundation model that is capable of fine-tuning with autonomous online data in open-world settings.

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-stachowicz25a,
  title     = {Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild},
  author    = {Stachowicz, Kyle and Ignatova, Lydia and Levine, Sergey},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {1035--1047},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/stachowicz25a/stachowicz25a.pdf},
  url       = {https://proceedings.mlr.press/v270/stachowicz25a.html},
  abstract  = {Recent works have proposed a number of general-purpose robotic foundation models that can control a variety of robotic platforms to perform a range of different tasks, including in the domains of navigation and manipulation. However, such models are typically trained via imitation learning, which precludes the ability to improve autonomously through experience that the robot gathers on the job. In this work, our aim is to train general-purpose robotic foundation models in the domain of robotic navigation specifically with the aim of enabling autonomous self-improvement. We show that a combination of pretraining with offline reinforcement learning and a complete system for continual autonomous operation leads to a robotic learning framework that not only starts off with broad and diverse capabilities, but can further improve and adapt those capabilities in the course of carrying out navigational tasks in a given deployment location. To our knowledge, our model LiReN is the first navigation robot foundation model that is capable of fine-tuning with autonomous online data in open-world settings.}
}
Endnote
%0 Conference Paper
%T Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild
%A Kyle Stachowicz
%A Lydia Ignatova
%A Sergey Levine
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-stachowicz25a
%I PMLR
%P 1035--1047
%U https://proceedings.mlr.press/v270/stachowicz25a.html
%V 270
%X Recent works have proposed a number of general-purpose robotic foundation models that can control a variety of robotic platforms to perform a range of different tasks, including in the domains of navigation and manipulation. However, such models are typically trained via imitation learning, which precludes the ability to improve autonomously through experience that the robot gathers on the job. In this work, our aim is to train general-purpose robotic foundation models in the domain of robotic navigation specifically with the aim of enabling autonomous self-improvement. We show that a combination of pretraining with offline reinforcement learning and a complete system for continual autonomous operation leads to a robotic learning framework that not only starts off with broad and diverse capabilities, but can further improve and adapt those capabilities in the course of carrying out navigational tasks in a given deployment location. To our knowledge, our model LiReN is the first navigation robot foundation model that is capable of fine-tuning with autonomous online data in open-world settings.
APA
Stachowicz, K., Ignatova, L., & Levine, S. (2025). Lifelong Autonomous Improvement of Navigation Foundation Models in the Wild. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:1035-1047. Available from https://proceedings.mlr.press/v270/stachowicz25a.html.