Tell Me Where to Go: A Composable Framework for Context-Aware Embodied Robot Navigation

Harel Biggie, Ajay Narasimha Mopidevi, Dusty Woods, Chris Heckman
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1640-1666, 2023.

Abstract

Humans have the remarkable ability to navigate through unfamiliar environments by relying solely on prior knowledge and descriptions of the environment. For robots to perform the same type of navigation, they need to be able to associate natural language descriptions with the corresponding physical environment using a limited amount of prior knowledge. Recently, Large Language Models (LLMs) have been able to reason over billions of parameters and utilize that knowledge in multi-modal, chat-based natural language responses. However, LLMs lack real-world awareness and their outputs are not always predictable. In this work, we develop a low-bandwidth framework that addresses this lack of real-world awareness by creating an intermediate layer, in the form of Python code, between an LLM and a robot navigation framework. Our intermediate layer shoehorns the vast prior knowledge inherent in an LLM into a series of input and output API instructions that a mobile robot can understand. We evaluate our method across four different environments and command classes on a mobile robot and highlight our framework's ability to interpret contextual commands.
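
The mechanism the abstract describes, constraining the LLM to emit Python calls against a small navigation API rather than free-form text, can be illustrated with a minimal sketch. Everything below (NavigationAPI, get_landmarks, go_to, the prompt wording) is a hypothetical stand-in under assumed names, not the paper's actual interface.

import re

class NavigationAPI:
    """Constrained surface the LLM is allowed to target (names are hypothetical)."""

    def get_landmarks(self) -> list[str]:
        # A real system would query the robot's semantic map here.
        return ["kitchen", "charging_dock", "room_201"]

    def go_to(self, landmark: str) -> None:
        # A real system would forward the goal to the robot's navigation stack.
        print(f"Navigating to: {landmark}")


def build_prompt(command: str, api: NavigationAPI) -> str:
    # Ground the LLM by enumerating the only valid outputs.
    return (
        "You control a mobile robot. Known landmarks: "
        + ", ".join(api.get_landmarks())
        + '.\nRespond with exactly one line of Python: api.go_to("<landmark>").\n'
        + "Command: " + command
    )


def execute(llm_output: str, api: NavigationAPI) -> None:
    # Validate before executing: accept only the expected call pattern,
    # guarding against the unpredictable outputs the abstract warns about.
    match = re.fullmatch(r'api\.go_to\("([^"]+)"\)', llm_output.strip())
    if match and match.group(1) in api.get_landmarks():
        api.go_to(match.group(1))
    else:
        print(f"Rejected LLM output: {llm_output!r}")


api = NavigationAPI()
print(build_prompt("I'm hungry, take me somewhere with food", api))
execute('api.go_to("kitchen")', api)  # e.g., the line an LLM might return

Restricting execution to a whitelist of known API calls is what makes the otherwise unpredictable LLM output safe for the robot to act on, which appears to be the role of the paper's intermediate layer.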

Cite this Paper

BibTeX
@InProceedings{pmlr-v229-biggie23a,
  title     = {Tell Me Where to Go: A Composable Framework for Context-Aware Embodied Robot Navigation},
  author    = {Biggie, Harel and Mopidevi, Ajay Narasimha and Woods, Dusty and Heckman, Chris},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1640--1666},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/biggie23a/biggie23a.pdf},
  url       = {https://proceedings.mlr.press/v229/biggie23a.html}
}
Endnote
%0 Conference Paper
%T Tell Me Where to Go: A Composable Framework for Context-Aware Embodied Robot Navigation
%A Harel Biggie
%A Ajay Narasimha Mopidevi
%A Dusty Woods
%A Chris Heckman
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-biggie23a
%I PMLR
%P 1640--1666
%U https://proceedings.mlr.press/v229/biggie23a.html
%V 229
APA
Biggie, H., Mopidevi, A.N., Woods, D. & Heckman, C. (2023). Tell Me Where to Go: A Composable Framework for Context-Aware Embodied Robot Navigation. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1640-1666. Available from https://proceedings.mlr.press/v229/biggie23a.html.
