Auto-Encoding Morph-Tokens for Multimodal LLM

Kaihang Pan, Siliang Tang, Juncheng Li, Zhaoyu Fan, Wei Chow, Shuicheng Yan, Tat-Seng Chua, Yueting Zhuang, Hanwang Zhang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:39308-39323, 2024.

Abstract

For multimodal LLMs, the synergy of visual comprehension (textual output) and generation (visual output) presents an ongoing challenge. This is due to a conflicting objective: for comprehension, an MLLM needs to abstract the visuals; for generation, it needs to preserve the visuals as much as possible. This objective thus poses a dilemma for visual-tokens. To resolve the conflict, we propose encoding images into morph-tokens that serve a dual purpose: for comprehension, they act as visual prompts instructing the MLLM to generate texts; for generation, they take on a different, non-conflicting role as complete visual-tokens for image reconstruction, where the missing visual cues are recovered by the MLLM. Extensive experiments show that morph-tokens can achieve a new SOTA for multimodal comprehension and generation simultaneously. Our project is available at https://github.com/DCDmllm/MorphTokens.
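
The dual-role design described in the abstract is easiest to see as code. The sketch below is a minimal, self-contained illustration in plain PyTorch; every module, name, and size (the linear stand-in for a vision encoder, the attention-based abstractor, MorphTokenSketch itself) is an assumption made for exposition, not the authors' implementation, which lives in the linked repository. The point is the shared path: one set of morph-tokens feeds the same MLLM backbone, and only the output head differs between comprehension (text logits) and generation (visual reconstruction).

import torch
import torch.nn as nn

class MorphTokenSketch(nn.Module):
    """Toy dual-role morph-token flow (illustrative assumption, not the paper's code)."""
    def __init__(self, n_tokens=32, d_model=768, vocab=32000, patch_dim=1024):
        super().__init__()
        # Stand-in for a vision encoder (e.g. a ViT) plus an abstractor that
        # compresses many patch features into a few morph-tokens.
        self.patch_proj = nn.Linear(patch_dim, d_model)
        self.morph_queries = nn.Parameter(torch.randn(n_tokens, d_model))
        self.abstractor = nn.MultiheadAttention(d_model, 8, batch_first=True)
        # Stand-in for the MLLM backbone.
        layer = nn.TransformerEncoderLayer(d_model, 8, batch_first=True)
        self.mllm = nn.TransformerEncoder(layer, num_layers=2)
        self.text_head = nn.Linear(d_model, vocab)          # comprehension branch
        self.image_decoder = nn.Linear(d_model, patch_dim)  # generation branch

    def encode(self, patches):
        # Compress patch features into a short sequence of morph-tokens.
        feats = self.patch_proj(patches)                        # (B, P, d)
        q = self.morph_queries.expand(patches.size(0), -1, -1)  # (B, T, d)
        morph, _ = self.abstractor(q, feats, feats)             # (B, T, d)
        return morph

    def comprehend(self, patches):
        # Role 1: morph-tokens act as visual prompts; the MLLM emits text logits.
        return self.text_head(self.mllm(self.encode(patches)))

    def generate(self, patches):
        # Role 2: the same tokens feed the MLLM, which recovers the
        # abstracted-away visual cues before the decoder reconstructs
        # visual features, closing the auto-encoding loop.
        return self.image_decoder(self.mllm(self.encode(patches)))

patches = torch.randn(2, 196, 1024)        # fake batch of image patch features
model = MorphTokenSketch()
text_logits = model.comprehend(patches)    # (2, 32, 32000)
recon = model.generate(patches)            # (2, 32, 1024)

Because abstraction and preservation are handled by different heads over the same tokens, the two objectives no longer compete at the token level, which is the conflict the abstract identifies.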

Cite this Paper
BibTeX
@InProceedings{pmlr-v235-pan24h,
  title     = {Auto-Encoding Morph-Tokens for Multimodal {LLM}},
  author    = {Pan, Kaihang and Tang, Siliang and Li, Juncheng and Fan, Zhaoyu and Chow, Wei and Yan, Shuicheng and Chua, Tat-Seng and Zhuang, Yueting and Zhang, Hanwang},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {39308--39323},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24h/pan24h.pdf},
  url       = {https://proceedings.mlr.press/v235/pan24h.html},
  abstract  = {For multimodal LLMs, the synergy of visual comprehension (textual output) and generation (visual output) presents an ongoing challenge. This is due to a conflicting objective: for comprehension, an MLLM needs to abstract the visuals; for generation, it needs to preserve the visuals as much as possible. Thus, the objective is a dilemma for visual-tokens. To resolve the conflict, we propose encoding images into morph-tokens to serve a dual purpose: for comprehension, they act as visual prompts instructing MLLM to generate texts; for generation, they take on a different, non-conflicting role as complete visual-tokens for image reconstruction, where the missing visual cues are recovered by the MLLM. Extensive experiments show that morph-tokens can achieve a new SOTA for multimodal comprehension and generation simultaneously. Our project is available at https://github.com/DCDmllm/MorphTokens.}
}
Endnote
%0 Conference Paper
%T Auto-Encoding Morph-Tokens for Multimodal LLM
%A Kaihang Pan
%A Siliang Tang
%A Juncheng Li
%A Zhaoyu Fan
%A Wei Chow
%A Shuicheng Yan
%A Tat-Seng Chua
%A Yueting Zhuang
%A Hanwang Zhang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-pan24h
%I PMLR
%P 39308--39323
%U https://proceedings.mlr.press/v235/pan24h.html
%V 235
%X For multimodal LLMs, the synergy of visual comprehension (textual output) and generation (visual output) presents an ongoing challenge. This is due to a conflicting objective: for comprehension, an MLLM needs to abstract the visuals; for generation, it needs to preserve the visuals as much as possible. Thus, the objective is a dilemma for visual-tokens. To resolve the conflict, we propose encoding images into morph-tokens to serve a dual purpose: for comprehension, they act as visual prompts instructing MLLM to generate texts; for generation, they take on a different, non-conflicting role as complete visual-tokens for image reconstruction, where the missing visual cues are recovered by the MLLM. Extensive experiments show that morph-tokens can achieve a new SOTA for multimodal comprehension and generation simultaneously. Our project is available at https://github.com/DCDmllm/MorphTokens.
APA
Pan, K., Tang, S., Li, J., Fan, Z., Chow, W., Yan, S., Chua, T., Zhuang, Y. & Zhang, H. (2024). Auto-Encoding Morph-Tokens for Multimodal LLM. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:39308-39323. Available from https://proceedings.mlr.press/v235/pan24h.html.
