Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models

Mike Zhang, Kaixian Qu, Vaishakh Patil, Cesar Cadena, Marco Hutter
Proceedings of The 8th Conference on Robot Learning, PMLR 270:2120-2146, 2025.

Abstract

Large Language Models (LLMs) have emerged as a tool for robots to generate task plans using common sense reasoning. For an LLM to generate actionable plans, scene context must be provided, often through a map. Recent works have shifted from explicit maps with fixed semantic classes to implicit open-vocabulary maps based on queryable embeddings capable of representing any semantic class. However, because embeddings are implicit, they cannot directly report scene context and require further processing before they can be used with an LLM. To address this, we propose an explicit text-based map, built upon large-scale image recognition models, that can represent thousands of semantic classes while integrating easily with LLMs due to its text-based nature. We study how entities in our map can be localized and show through evaluations that localizations from our text-based map perform comparably to those from open-vocabulary maps while using two to four orders of magnitude less memory. Real-robot experiments demonstrate grounding an LLM with the text-based map to solve user tasks.
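
As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows what a minimal text-based tag map could look like: tags reported by an image-tagging model are stored per posed viewpoint, a tag query returns candidate viewpoint poses, and the whole map can be serialized as plain text for an LLM prompt. All class, method, and parameter names here are hypothetical.

    # Minimal illustrative sketch of a text-based tag map (assumed structure,
    # not the paper's code): tags are plain strings, viewpoints carry poses.
    from collections import defaultdict

    class TagMap:
        def __init__(self):
            # tag (plain text) -> set of viewpoint ids where that tag was recognized
            self._tag_to_viewpoints = defaultdict(set)
            # viewpoint id -> camera pose (e.g., (x, y, z, yaw))
            self._viewpoint_poses = {}

        def add_viewpoint(self, viewpoint_id, pose, tags):
            """Register the tags an image-tagging model reported for one posed image."""
            self._viewpoint_poses[viewpoint_id] = pose
            for tag in tags:
                self._tag_to_viewpoints[tag].add(viewpoint_id)

        def localize(self, tag):
            """Return poses of viewpoints from which the queried tag was seen."""
            return [self._viewpoint_poses[v] for v in self._tag_to_viewpoints.get(tag, ())]

        def to_prompt_context(self):
            """Serialize the map as plain text so it can be placed in an LLM prompt."""
            return "\n".join(
                f"{tag}: seen from {len(views)} viewpoint(s)"
                for tag, views in sorted(self._tag_to_viewpoints.items())
            )

Because the map is just text, the output of to_prompt_context() can be inserted directly into an LLM prompt without any embedding lookup or decoding step, which is the integration advantage the abstract points to.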

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-zhang25e,
  title     = {Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models},
  author    = {Zhang, Mike and Qu, Kaixian and Patil, Vaishakh and Cadena, Cesar and Hutter, Marco},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {2120--2146},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/zhang25e/zhang25e.pdf},
  url       = {https://proceedings.mlr.press/v270/zhang25e.html}
}
Endnote
%0 Conference Paper
%T Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models
%A Mike Zhang
%A Kaixian Qu
%A Vaishakh Patil
%A Cesar Cadena
%A Marco Hutter
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-zhang25e
%I PMLR
%P 2120--2146
%U https://proceedings.mlr.press/v270/zhang25e.html
%V 270
APA
Zhang, M., Qu, K., Patil, V., Cadena, C., & Hutter, M. (2025). Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:2120-2146. Available from https://proceedings.mlr.press/v270/zhang25e.html.
