From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection

Lincan Cai, Jingxuan Kang, Shuang Li, Wenxuan Ma, Binhui Xie, Zhida Qin, Jian Liang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:6229-6242, 2025.

Abstract

Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, such as random cropping, in aligning images with the fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations inevitably introduces background artifacts and causes models to focus excessively on local details, compromising global semantic understanding. To address these issues, we propose Attention-Based Selection (ABS), a method that moves from local details to global context: it applies attention-guided cropping to both raw images and the feature space, and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique that effectively filters LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods.
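The abstract only names the moving parts, so the following sketches are illustrative rather than the authors' released implementation. First, a minimal sketch of attention-guided cropping using an off-the-shelf CLIP from Hugging Face transformers: the [CLS] token's last-layer attention over image patches serves as a saliency map, and the crop is taken where that attention mass concentrates. The window search, the crop fraction, and the proportional mapping back to raw-image pixels are my own assumptions.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def attention_guided_crop(image: Image.Image, crop_frac: float = 0.5) -> Image.Image:
    """Crop the region where the ViT [CLS] token attends most strongly (a sketch)."""
    pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]
    out = model.vision_model(pixel_values=pixel_values, output_attentions=True)
    # Last-layer attention averaged over heads: token 0 is [CLS], the rest are patches.
    attn = out.attentions[-1].mean(dim=1)[0, 0, 1:]
    grid = int(attn.numel() ** 0.5)          # 7x7 patch grid for ViT-B/32 at 224px
    heat = attn.reshape(grid, grid)

    # Slide a square window over the patch grid; keep the highest-attention position.
    win = max(1, int(grid * crop_frac))
    best, best_pos = -1.0, (0, 0)
    for i in range(grid - win + 1):
        for j in range(grid - win + 1):
            mass = heat[i:i + win, j:j + win].sum().item()
            if mass > best:
                best, best_pos = mass, (i, j)

    # Map the winning window back to raw-image pixels (approximate, since the
    # processor resizes and center-crops before the ViT sees the image).
    W, H = image.size
    i, j = best_pos
    box = (int(j / grid * W), int(i / grid * H),
           int((j + win) / grid * W), int((i + win) / grid * H))
    return image.crop(box)

Second, a hedged sketch of soft matching between an image (or crop) feature and a class's LLM-generated description embeddings: rather than hard top-k filtering, each description's cosine similarity is weighted by a softmax over all descriptions of that class, so poorly matching descriptions are softly down-weighted. The temperature value is an illustrative assumption, not the paper's exact recipe.

def soft_match_score(img_feat: torch.Tensor, desc_feats: torch.Tensor,
                     tau: float = 0.01) -> torch.Tensor:
    """img_feat: (d,); desc_feats: (n_desc, d); returns a scalar score for one class."""
    img_feat = img_feat / img_feat.norm()
    desc_feats = desc_feats / desc_feats.norm(dim=-1, keepdim=True)
    sims = desc_feats @ img_feat                  # cosine similarity per description
    weights = torch.softmax(sims / tau, dim=0)    # soft filter over descriptions
    return (weights * sims).sum()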

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cai25d,
  title     = {From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection},
  author    = {Cai, Lincan and Kang, Jingxuan and Li, Shuang and Ma, Wenxuan and Xie, Binhui and Qin, Zhida and Liang, Jian},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {6229--6242},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cai25d/cai25d.pdf},
  url       = {https://proceedings.mlr.press/v267/cai25d.html},
  abstract  = {Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, such as random cropping, in aligning images with the fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations inevitably introduces background artifacts and causes models to focus excessively on local details, compromising global semantic understanding. To address these issues, we propose Attention-Based Selection (ABS), a method that moves from local details to global context: it applies attention-guided cropping to both raw images and the feature space, and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique that effectively filters LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods.}
}
Endnote
%0 Conference Paper
%T From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection
%A Lincan Cai
%A Jingxuan Kang
%A Shuang Li
%A Wenxuan Ma
%A Binhui Xie
%A Zhida Qin
%A Jian Liang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cai25d
%I PMLR
%P 6229--6242
%U https://proceedings.mlr.press/v267/cai25d.html
%V 267
%X Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, such as random cropping, in aligning images with the fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations inevitably introduces background artifacts and causes models to focus excessively on local details, compromising global semantic understanding. To address these issues, we propose Attention-Based Selection (ABS), a method that moves from local details to global context: it applies attention-guided cropping to both raw images and the feature space, and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique that effectively filters LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods.
APA
Cai, L., Kang, J., Li, S., Ma, W., Xie, B., Qin, Z. & Liang, J. (2025). From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:6229-6242. Available from https://proceedings.mlr.press/v267/cai25d.html.
