Unifying Vision-Language Latents for Zero-label Image Caption Enhancement

Sanghyun Byun, Jung Ick Guack, Mohanad Odema, Baisub Lee, Jacob Song, Woo Seong Chung
Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models, PMLR 322:248-261, 2026.

Abstract

Vision-language models (VLMs) achieve remarkable performance through large-scale image–text pretraining. However, their reliance on labeled image datasets limits scalability and leaves vast amounts of unlabeled image data underutilized. To address this, we propose Unified Vision-Language Alignment for Zero-Label Enhancement (ViZer), an enhancement training framework that enables zero-label learning in image captioning, providing a practical starting point for broader zero-label adaptation in vision-language tasks. Unlike prior approaches that rely on human or synthetically annotated datasets, ViZer actively aligns vision and language representation features during training, enabling existing VLMs to generate improved captions without requiring text labels or full retraining. We demonstrate ViZer’s advantage in qualitative evaluation, as automated caption metrics such as CIDEr and BERTScore often penalize details that are absent in reference captions. Applying ViZer to SmolVLM-Base and Qwen2-VL, we observe consistent qualitative improvements, producing captions that are more grounded and descriptive than their baselines.
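To make the alignment idea concrete, the sketch below shows one plausible form of a zero-label vision–language alignment objective. This is a minimal illustration under assumptions, not the authors' implementation: the projection head, mean pooling, and cosine-distance loss are guesses, and the feature tensors stand in for the frozen VLM's internal vision features and the hidden states of its own generated caption.

    import torch
    import torch.nn.functional as F
    from torch import nn

    class AlignmentHead(nn.Module):
        """Hypothetical lightweight head trained on unlabeled images;
        the backbone VLM stays frozen (no text labels, no full retraining)."""
        def __init__(self, vision_dim: int, text_dim: int, shared_dim: int = 512):
            super().__init__()
            self.vision_proj = nn.Linear(vision_dim, shared_dim)
            self.text_proj = nn.Linear(text_dim, shared_dim)

        def forward(self, vision_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
            # Mean-pool token-level features into one vector per image / caption.
            v = F.normalize(self.vision_proj(vision_feats.mean(dim=1)), dim=-1)
            t = F.normalize(self.text_proj(text_feats.mean(dim=1)), dim=-1)
            # Zero-label objective: pull the caption representation toward the
            # image representation via cosine distance; no reference captions used.
            return (1.0 - (v * t).sum(dim=-1)).mean()

Because text_feats would come from the VLM's hidden states for its own generated caption, such a loss consumes only raw images; whether ViZer uses cosine distance, a contrastive objective, or another alignment criterion is not specified on this page.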

Cite this Paper

BibTeX
@InProceedings{pmlr-v322-byun26a,
  title = {Unifying Vision-Language Latents for Zero-label Image Caption Enhancement},
  author = {Byun, Sanghyun and Guack, Jung Ick and Odema, Mohanad and Lee, Baisub and Song, Jacob and Chung, Woo Seong},
  booktitle = {Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models},
  pages = {248--261},
  year = {2026},
  editor = {Fumero, Marco and Domine, Clementine and L{\"a}hner, Zorah and Cannistraci, Irene and Zhao, Bo and Williams, Alex},
  volume = {322},
  series = {Proceedings of Machine Learning Research},
  month = {06 Dec},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v322/main/assets/byun26a/byun26a.pdf},
  url = {https://proceedings.mlr.press/v322/byun26a.html},
  abstract = {Vision-language models (VLMs) achieve remarkable performance through large-scale image–text pretraining. However, their reliance on labeled image datasets limits scalability and leaves vast amounts of unlabeled image data underutilized. To address this, we propose Unified Vision-Language Alignment for Zero-Label Enhancement (ViZer), an enhancement training framework that enables zero-label learning in image captioning, providing a practical starting point for broader zero-label adaptation in vision-language tasks. Unlike prior approaches that rely on human or synthetically annotated datasets, ViZer actively aligns vision and language representation features during training, enabling existing VLMs to generate improved captions without requiring text labels or full retraining. We demonstrate ViZer’s advantage in qualitative evaluation, as automated caption metrics such as CIDEr and BERTScore often penalize details that are absent in reference captions. Applying ViZer to SmolVLM-Base and Qwen2-VL, we observe consistent qualitative improvements, producing captions that are more grounded and descriptive than their baselines.}
}
Endnote
%0 Conference Paper
%T Unifying Vision-Language Latents for Zero-label Image Caption Enhancement
%A Sanghyun Byun
%A Jung Ick Guack
%A Mohanad Odema
%A Baisub Lee
%A Jacob Song
%A Woo Seong Chung
%B Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2026
%E Marco Fumero
%E Clementine Domine
%E Zorah Lähner
%E Irene Cannistraci
%E Bo Zhao
%E Alex Williams
%F pmlr-v322-byun26a
%I PMLR
%P 248--261
%U https://proceedings.mlr.press/v322/byun26a.html
%V 322
%X Vision-language models (VLMs) achieve remarkable performance through large-scale image–text pretraining. However, their reliance on labeled image datasets limits scalability and leaves vast amounts of unlabeled image data underutilized. To address this, we propose Unified Vision-Language Alignment for Zero-Label Enhancement (ViZer), an enhancement training framework that enables zero-label learning in image captioning, providing a practical starting point for broader zero-label adaptation in vision-language tasks. Unlike prior approaches that rely on human or synthetically annotated datasets, ViZer actively aligns vision and language representation features during training, enabling existing VLMs to generate improved captions without requiring text labels or full retraining. We demonstrate ViZer’s advantage in qualitative evaluation, as automated caption metrics such as CIDEr and BERTScore often penalize details that are absent in reference captions. Applying ViZer to SmolVLM-Base and Qwen2-VL, we observe consistent qualitative improvements, producing captions that are more grounded and descriptive than their baselines.
APA
Byun, S., Guack, J.I., Odema, M., Lee, B., Song, J. & Chung, W.S. (2026). Unifying Vision-Language Latents for Zero-label Image Caption Enhancement. Proceedings of UniReps: the Third Edition of the Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 322:248-261. Available from https://proceedings.mlr.press/v322/byun26a.html.
