Using Left and Right Brains Together: Towards Vision and Language Planning

Jun Cen, Chenfei Wu, Xiao Liu, Shengming Yin, Yixuan Pei, Jinglong Yang, Qifeng Chen, Nan Duan, Jianguo Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:5982-6001, 2024.

Abstract

Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have demonstrated remarkable decision-making capabilities on a variety of tasks. However, they inherently perform planning within the language space, lacking vision and spatial imagination abilities. In contrast, humans utilize both the left and right hemispheres of the brain for language and visual planning during the thinking process. Therefore, in this work we introduce a novel vision-language planning framework that performs concurrent visual and language planning for tasks with inputs of any form. Our framework incorporates visual planning to capture intricate environmental details, while language planning enhances the logical coherence of the overall system. We evaluate the effectiveness of our framework across vision-language tasks, vision-only tasks, and language-only tasks. The results demonstrate the superior performance of our approach, indicating that the integration of visual and language planning yields more contextually aware task execution.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-cen24a,
  title     = {Using Left and Right Brains Together: Towards Vision and Language Planning},
  author    = {Cen, Jun and Wu, Chenfei and Liu, Xiao and Yin, Shengming and Pei, Yixuan and Yang, Jinglong and Chen, Qifeng and Duan, Nan and Zhang, Jianguo},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {5982--6001},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/cen24a/cen24a.pdf},
  url       = {https://proceedings.mlr.press/v235/cen24a.html},
  abstract  = {Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have demonstrated remarkable decision-making capabilities on a variety of tasks. However, they inherently perform planning within the language space, lacking vision and spatial imagination abilities. In contrast, humans utilize both the left and right hemispheres of the brain for language and visual planning during the thinking process. Therefore, in this work we introduce a novel vision-language planning framework that performs concurrent visual and language planning for tasks with inputs of any form. Our framework incorporates visual planning to capture intricate environmental details, while language planning enhances the logical coherence of the overall system. We evaluate the effectiveness of our framework across vision-language tasks, vision-only tasks, and language-only tasks. The results demonstrate the superior performance of our approach, indicating that the integration of visual and language planning yields more contextually aware task execution.}
}
Endnote
%0 Conference Paper %T Using Left and Right Brains Together: Towards Vision and Language Planning %A Jun Cen %A Chenfei Wu %A Xiao Liu %A Shengming Yin %A Yixuan Pei %A Jinglong Yang %A Qifeng Chen %A Nan Duan %A Jianguo Zhang %B Proceedings of the 41st International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2024 %E Ruslan Salakhutdinov %E Zico Kolter %E Katherine Heller %E Adrian Weller %E Nuria Oliver %E Jonathan Scarlett %E Felix Berkenkamp %F pmlr-v235-cen24a %I PMLR %P 5982--6001 %U https://proceedings.mlr.press/v235/cen24a.html %V 235 %X Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have demonstrated remarkable decision-making capabilities on a variety of tasks. However, they inherently perform planning within the language space, lacking vision and spatial imagination abilities. In contrast, humans utilize both the left and right hemispheres of the brain for language and visual planning during the thinking process. Therefore, in this work we introduce a novel vision-language planning framework that performs concurrent visual and language planning for tasks with inputs of any form. Our framework incorporates visual planning to capture intricate environmental details, while language planning enhances the logical coherence of the overall system. We evaluate the effectiveness of our framework across vision-language tasks, vision-only tasks, and language-only tasks. The results demonstrate the superior performance of our approach, indicating that the integration of visual and language planning yields more contextually aware task execution.
APA
Cen, J., Wu, C., Liu, X., Yin, S., Pei, Y., Yang, J., Chen, Q., Duan, N. & Zhang, J. (2024). Using Left and Right Brains Together: Towards Vision and Language Planning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:5982-6001. Available from https://proceedings.mlr.press/v235/cen24a.html.

Related Material