Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels

Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, Weisi Lin
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:54015-54029, 2024.

Abstract

The explosion of visual content available online underscores the need for accurate machine assessors that can robustly evaluate scores across diverse types of visual content. While recent studies have demonstrated the exceptional potential of large multi-modality models (LMMs) across a wide range of related fields, in this work we explore how to teach them to perform visual rating aligned with human opinions. Observing that human raters only learn and judge discrete text-defined levels in subjective studies, we propose to emulate this subjective process and teach LMMs with text-defined rating levels instead of scores. The proposed Q-Align achieves state-of-the-art accuracy on image quality assessment (IQA), image aesthetic assessment (IAA), and video quality assessment (VQA) under the original LMM structure. With this syllabus, we further unify the three tasks into one model, termed OneAlign. Our experiments demonstrate the advantage of discrete levels over direct scores for training, and show that LMMs can learn beyond the discrete levels and provide effective finer-grained evaluations. Code and weights will be released.
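
As a rough illustration of how discrete text-defined levels can still yield the "finer-grained evaluations" the abstract mentions, the minimal Python sketch below converts an LMM's logits for the level tokens into a continuous score via a closed-set softmax followed by a probability-weighted average. The five-grade vocabulary (excellent/good/fair/poor/bad), its 5-to-1 numeric anchors, and the level_logits_to_score helper are illustrative assumptions in the spirit of the paper, not the authors' released implementation.

    import math

    # Assumed five-grade rating vocabulary and its numeric anchors
    # (standard ITU-style five-level scale, mapped to scores 5..1).
    LEVELS = ["excellent", "good", "fair", "poor", "bad"]
    WEIGHTS = [5.0, 4.0, 3.0, 2.0, 1.0]

    def level_logits_to_score(logits):
        """Turn next-token logits restricted to the five level tokens
        (in LEVELS order) into a continuous score: closed-set softmax,
        then the expectation over the numeric anchors."""
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        return sum(p * w for p, w in zip(probs, WEIGHTS))

    # Example: a model mostly confident in "good", with some mass on "fair"
    print(round(level_logits_to_score([1.2, 3.5, 2.0, -1.0, -2.5]), 3))

Because the expectation interpolates between the numeric anchors, the output varies continuously even though training only ever exposes the model to five discrete words.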

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-wu24ah,
  title     = {Q-Align: Teaching {LMM}s for Visual Scoring via Discrete Text-Defined Levels},
  author    = {Wu, Haoning and Zhang, Zicheng and Zhang, Weixia and Chen, Chaofeng and Liao, Liang and Li, Chunyi and Gao, Yixuan and Wang, Annan and Zhang, Erli and Sun, Wenxiu and Yan, Qiong and Min, Xiongkuo and Zhai, Guangtao and Lin, Weisi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {54015--54029},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wu24ah/wu24ah.pdf},
  url       = {https://proceedings.mlr.press/v235/wu24ah.html}
}
Endnote
%0 Conference Paper
%T Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels
%A Haoning Wu
%A Zicheng Zhang
%A Weixia Zhang
%A Chaofeng Chen
%A Liang Liao
%A Chunyi Li
%A Yixuan Gao
%A Annan Wang
%A Erli Zhang
%A Wenxiu Sun
%A Qiong Yan
%A Xiongkuo Min
%A Guangtao Zhai
%A Weisi Lin
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wu24ah
%I PMLR
%P 54015--54029
%U https://proceedings.mlr.press/v235/wu24ah.html
%V 235
APA
Wu, H., Zhang, Z., Zhang, W., Chen, C., Liao, L., Li, C., Gao, Y., Wang, A., Zhang, E., Sun, W., Yan, Q., Min, X., Zhai, G. & Lin, W. (2024). Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:54015-54029. Available from https://proceedings.mlr.press/v235/wu24ah.html.
