LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning

Dantong Niu, Yuvan Sharma, Giscard Biamby, Jerome Quenum, Yutong Bai, Baifeng Shi, Trevor Darrell, Roei Herzig
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3333-3355, 2025.

Abstract

In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics applications have been extensively trained on language and action data, but their ability to generalize in different settings has often been less than desired. To address this, we introduce LLARVA, a model trained with a novel instruction tuning method that leverages structured prompts to unify a range of robotic learning tasks, scenarios, and environments. Additionally, we show that predicting intermediate 2-D representations, which we refer to as *visual traces*, can help further align vision and action spaces for robot learning. We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model, and we evaluate on 12 different tasks in the RLBench simulator as well as a physical Franka Emika Panda 7-DoF robot. Our experiments yield strong performance, demonstrating that LLARVA — using 2-D and language representations — performs well compared to several contemporary baselines, and can generalize across various robot environments and configurations.
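
To make the abstract's "structured prompt + 2-D visual trace" idea concrete, below is a minimal sketch of what a single vision-action instruction-tuning sample could look like. The field names, prompt template, and numeric values are illustrative assumptions only, not the paper's actual data format.

```python
# Hypothetical sketch of a vision-action instruction-tuning sample, based only
# on the abstract: an image observation, a structured language prompt, a 2-D
# "visual trace" (pixel waypoints of the end effector), and a target action.
# All names and the prompt wording below are assumptions, not LLARVA's format.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class VisionActionSample:
    image_path: str                      # RGB observation from the robot camera
    instruction: str                     # structured prompt given to the LMM
    visual_trace: List[Tuple[int, int]]  # 2-D pixel waypoints of the end effector
    action: List[float]                  # target action, e.g. a 7-DoF command


def build_prompt(robot: str, control_mode: str, task: str) -> str:
    """Assemble a structured prompt naming the robot, control mode, and task.

    The template is a guess at the kind of structured prompt the abstract
    describes; the real template may differ.
    """
    return (
        f"Robot: {robot}. Control: {control_mode}. Task: {task}. "
        f"Predict the 2-D visual trace of the end effector and the next action."
    )


if __name__ == "__main__":
    sample = VisionActionSample(
        image_path="frame_000.png",
        instruction=build_prompt(
            "Franka Emika Panda 7-DoF", "joint velocity", "pick up the red block"
        ),
        visual_trace=[(112, 96), (118, 104), (127, 115)],  # placeholder pixel trace
        action=[0.01, -0.02, 0.0, 0.0, 0.0, 0.03, 1.0],    # placeholder 7-DoF action
    )
    print(sample.instruction)
```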

Cite this Paper

BibTeX
@InProceedings{pmlr-v270-niu25a,
  title     = {LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning},
  author    = {Niu, Dantong and Sharma, Yuvan and Biamby, Giscard and Quenum, Jerome and Bai, Yutong and Shi, Baifeng and Darrell, Trevor and Herzig, Roei},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3333--3355},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/niu25a/niu25a.pdf},
  url       = {https://proceedings.mlr.press/v270/niu25a.html},
  abstract  = {In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics applications have been extensively trained on language and action data, but their ability to generalize in different settings has often been less than desired. To address this, we introduce LLARVA, a model trained with a novel instruction tuning method that leverages structured prompts to unify a range of robotic learning tasks, scenarios, and environments. Additionally, we show that predicting intermediate 2-D representations, which we refer to as *visual traces*, can help further align vision and action spaces for robot learning. We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model, and we evaluate on 12 different tasks in the RLBench simulator as well as a physical Franka Emika Panda 7-DoF robot. Our experiments yield strong performance, demonstrating that LLARVA — using 2-D and language representations — performs well compared to several contemporary baselines, and can generalize across various robot environments and configurations.}
}
Endnote
%0 Conference Paper
%T LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning
%A Dantong Niu
%A Yuvan Sharma
%A Giscard Biamby
%A Jerome Quenum
%A Yutong Bai
%A Baifeng Shi
%A Trevor Darrell
%A Roei Herzig
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-niu25a
%I PMLR
%P 3333--3355
%U https://proceedings.mlr.press/v270/niu25a.html
%V 270
%X In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics applications have been extensively trained on language and action data, but their ability to generalize in different settings has often been less than desired. To address this, we introduce LLARVA, a model trained with a novel instruction tuning method that leverages structured prompts to unify a range of robotic learning tasks, scenarios, and environments. Additionally, we show that predicting intermediate 2-D representations, which we refer to as *visual traces*, can help further align vision and action spaces for robot learning. We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model, and we evaluate on 12 different tasks in the RLBench simulator as well as a physical Franka Emika Panda 7-DoF robot. Our experiments yield strong performance, demonstrating that LLARVA — using 2-D and language representations — performs well compared to several contemporary baselines, and can generalize across various robot environments and configurations.
APA
Niu, D., Sharma, Y., Biamby, G., Quenum, J., Bai, Y., Shi, B., Darrell, T. & Herzig, R. (2025). LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3333-3355. Available from https://proceedings.mlr.press/v270/niu25a.html.
