Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving

Thomas Tian, Boyi Li, Xinshuo Weng, Yuxiao Chen, Edward Schmerling, Yue Wang, Boris Ivanovic, Marco Pavone
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3656-3673, 2025.

Abstract

The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to rare or unseen inputs within their training distributions. To address this, we propose TOKEN, a novel Multi-Modal Large Language Model (MM-LLM) that tokenizes the world into object-level knowledge, enabling better utilization of LLM’s reasoning capabilities to enhance autonomous vehicle planning in long-tail scenarios. TOKEN effectively alleviates data scarcity and inefficient tokenization by producing condensed and semantically enriched representations of the scene. Our results demonstrate that TOKEN excels in grounding, reasoning, and planning capabilities, outperforming existing frameworks with a 27% reduction in trajectory L2 error and a 39% decrease in collision rates in long-tail scenarios. Additionally, our work highlights the importance of representation alignment and structured reasoning in sparking the common-sense reasoning capabilities of MM-LLMs for effective planning.

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-tian25b,
  title     = {Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving},
  author    = {Tian, Thomas and Li, Boyi and Weng, Xinshuo and Chen, Yuxiao and Schmerling, Edward and Wang, Yue and Ivanovic, Boris and Pavone, Marco},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3656--3673},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/tian25b/tian25b.pdf},
  url       = {https://proceedings.mlr.press/v270/tian25b.html},
  abstract  = {The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to rare or unseen inputs within their training distributions. To address this, we propose TOKEN, a novel Multi-Modal Large Language Model (MM-LLM) that tokenizes the world into object-level knowledge, enabling better utilization of LLM’s reasoning capabilities to enhance autonomous vehicle planning in long-tail scenarios. TOKEN effectively alleviates data scarcity and inefficient tokenization by producing condensed and semantically enriched representations of the scene. Our results demonstrate that TOKEN excels in grounding, reasoning, and planning capabilities, outperforming existing frameworks with a 27% reduction in trajectory L2 error and a 39% decrease in collision rates in long-tail scenarios. Additionally, our work highlights the importance of representation alignment and structured reasoning in sparking the common-sense reasoning capabilities of MM-LLMs for effective planning.}
}
Endnote
%0 Conference Paper
%T Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving
%A Thomas Tian
%A Boyi Li
%A Xinshuo Weng
%A Yuxiao Chen
%A Edward Schmerling
%A Yue Wang
%A Boris Ivanovic
%A Marco Pavone
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-tian25b
%I PMLR
%P 3656--3673
%U https://proceedings.mlr.press/v270/tian25b.html
%V 270
%X The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to rare or unseen inputs within their training distributions. To address this, we propose TOKEN, a novel Multi-Modal Large Language Model (MM-LLM) that tokenizes the world into object-level knowledge, enabling better utilization of LLM’s reasoning capabilities to enhance autonomous vehicle planning in long-tail scenarios. TOKEN effectively alleviates data scarcity and inefficient tokenization by producing condensed and semantically enriched representations of the scene. Our results demonstrate that TOKEN excels in grounding, reasoning, and planning capabilities, outperforming existing frameworks with a 27% reduction in trajectory L2 error and a 39% decrease in collision rates in long-tail scenarios. Additionally, our work highlights the importance of representation alignment and structured reasoning in sparking the common-sense reasoning capabilities of MM-LLMs for effective planning.
APA
Tian, T., Li, B., Weng, X., Chen, Y., Schmerling, E., Wang, Y., Ivanovic, B., & Pavone, M. (2025). Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3656-3673. Available from https://proceedings.mlr.press/v270/tian25b.html.