Defending LVLMs Against Vision Attacks Through Partial-Perception Supervision

Qi Zhou, Dongxia Wang, Tianlin Li, Yun Lin, Yang Liu, Jin Song Dong, Qing Guo
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:79254-79280, 2025.

Abstract

Recent studies have raised significant concerns regarding the vulnerability of Large Vision Language Models (LVLMs) to maliciously injected or perturbed input images, which can mislead their responses. Existing defense methods show that such vision attacks are sensitive to image modifications, especially cropping, and use majority voting across responses to the modified images as the corrected response. However, these modifications often result in partial images and distort the semantics, which reduces response quality on clean images after voting. Instead of directly using responses from partial images for voting, we investigate using them to supervise (guide) the LVLM’s responses to the original images at inference time. We propose a black-box, training-free method called DPS (Defense through Partial-Perception Supervision). In this approach, the model is prompted using the responses generated by a model that perceives only a partial image. With DPS, the model can adjust its response based on partial image understanding when under attack, while confidently maintaining its original response for clean input. Empirical experiments show our method outperforms the baseline, cutting the average attack success rate by 76.3% across six datasets on three popular models.
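
A minimal sketch of the inference-time flow the abstract describes: first obtain a response from a model that perceives only a partial (cropped) image, then use that response to supervise the model's answer on the original image. It assumes a hypothetical black-box wrapper `lvlm_generate(image, prompt)`; the crop ratio, prompt wording, and helper names are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of partial-perception supervision (DPS-style), under the assumptions above.
from PIL import Image


def center_crop(image: Image.Image, ratio: float = 0.5) -> Image.Image:
    """Return a center crop covering `ratio` of each dimension (partial perception)."""
    w, h = image.size
    cw, ch = int(w * ratio), int(h * ratio)
    left, top = (w - cw) // 2, (h - ch) // 2
    return image.crop((left, top, left + cw, top + ch))


def dps_respond(image: Image.Image, question: str, lvlm_generate) -> str:
    """`lvlm_generate(image, prompt) -> str` is a hypothetical black-box LVLM call."""
    # 1) Response from a model that perceives only a partial image.
    partial_answer = lvlm_generate(center_crop(image), question)

    # 2) Supervise the response to the ORIGINAL image with the partial-view answer:
    #    under attack the model can revise its answer toward the partial-view
    #    understanding, while for clean input it can confidently keep its own answer.
    supervised_prompt = (
        f"{question}\n"
        f'A reference answer obtained from a partial view of this image is: "{partial_answer}".\n'
        f"Answer the question for the full image. If your answer conflicts with the "
        f"reference, reconsider carefully before responding."
    )
    return lvlm_generate(image, supervised_prompt)
```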

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zhou25z,
  title     = {Defending {LVLM}s Against Vision Attacks Through Partial-Perception Supervision},
  author    = {Zhou, Qi and Wang, Dongxia and Li, Tianlin and Lin, Yun and Liu, Yang and Dong, Jin Song and Guo, Qing},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {79254--79280},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhou25z/zhou25z.pdf},
  url       = {https://proceedings.mlr.press/v267/zhou25z.html},
  abstract  = {Recent studies have raised significant concerns regarding the vulnerability of Large Vision Language Models (LVLMs) to maliciously injected or perturbed input images, which can mislead their responses. Existing defense methods show that such vision attacks are sensitive to image modifications especially cropping, using majority voting across responses of modified images as corrected responses. However, these modifications often result in partial images and distort the semantics, which reduces response quality on clean images after voting. Instead of directly using responses from partial images for voting, we investigate using them to supervise (guide) the LVLM’s responses to the original images at inference time. We propose a black-box, training-free method called DPS (Defense through Partial-Perception Supervision). In this approach, the model is prompted using the responses generated by a model that perceives only a partial image. With DPS, the model can adjust its response based on partial image understanding when under attack, while confidently maintaining its original response for clean input. Empirical experiments show our method outperforms the baseline, cutting the average attack success rate by 76.3% across six datasets on three popular models.}
}
Endnote
%0 Conference Paper
%T Defending LVLMs Against Vision Attacks Through Partial-Perception Supervision
%A Qi Zhou
%A Dongxia Wang
%A Tianlin Li
%A Yun Lin
%A Yang Liu
%A Jin Song Dong
%A Qing Guo
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhou25z
%I PMLR
%P 79254--79280
%U https://proceedings.mlr.press/v267/zhou25z.html
%V 267
%X Recent studies have raised significant concerns regarding the vulnerability of Large Vision Language Models (LVLMs) to maliciously injected or perturbed input images, which can mislead their responses. Existing defense methods show that such vision attacks are sensitive to image modifications especially cropping, using majority voting across responses of modified images as corrected responses. However, these modifications often result in partial images and distort the semantics, which reduces response quality on clean images after voting. Instead of directly using responses from partial images for voting, we investigate using them to supervise (guide) the LVLM’s responses to the original images at inference time. We propose a black-box, training-free method called DPS (Defense through Partial-Perception Supervision). In this approach, the model is prompted using the responses generated by a model that perceives only a partial image. With DPS, the model can adjust its response based on partial image understanding when under attack, while confidently maintaining its original response for clean input. Empirical experiments show our method outperforms the baseline, cutting the average attack success rate by 76.3% across six datasets on three popular models.
APA
Zhou, Q., Wang, D., Li, T., Lin, Y., Liu, Y., Dong, J.S., & Guo, Q. (2025). Defending LVLMs Against Vision Attacks Through Partial-Perception Supervision. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:79254-79280. Available from https://proceedings.mlr.press/v267/zhou25z.html.
