Activation Steering Meets Preference Optimization: Defense Against Jailbreaks in Vision Language Model

Sihao Wu, Gaojie Jin, Wei Huang, Jianhong Wang, Xiaowei Huang
Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304:17-32, 2025.

Abstract

Vision Language Models (VLMs) have demonstrated impressive capabilities in integrating visual and textual information for understanding and reasoning, but remain highly vulnerable to adversarial attacks. While activation steering has emerged as a promising defense, existing approaches often rely on task-specific contrastive prompts to extract harmful directions, which yield suboptimal robustness and can degrade visual grounding. To address these limitations, we propose Sequence-Level Preference Optimization for VLM (SPO-VLM), a novel two-stage defense framework that combines activation-level intervention with policy-level optimization to enhance model robustness. In Stage I, we compute adaptive layer-specific steering vectors from diverse data sources, enabling generalized suppression of harmful behaviors during inference. In Stage II, we refine these steering vectors through a sequence-level preference optimization process. This stage integrates automated toxicity assessment, as well as visual-consistency rewards based on caption-image alignment, to achieve safe and semantically grounded text generation. The two-stage structure of SPO-VLM balances efficiency and effectiveness by combining a lightweight mitigation foundation in Stage I with deeper policy refinement in Stage II. Extensive experiments show that SPO-VLM enhances safety against attacks via activation steering and preference optimization, while maintaining strong performance on benign tasks without compromising visual understanding capabilities. We will release our code, model weights, and evaluation toolkit to support reproducibility and future research. Warning: This paper may contain examples of offensive or harmful text and images.
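As a rough illustration (not code from the paper), the difference-of-means construction commonly used for activation steering — the baseline that Stage I generalizes with layer-specific vectors from diverse data — can be sketched as follows. All names and the projection-removal form of the intervention are assumptions for illustration only:

```python
import numpy as np

def steering_vector(harmful_acts: np.ndarray, benign_acts: np.ndarray) -> np.ndarray:
    """Unit 'harmful direction' as the difference of mean activations.

    harmful_acts, benign_acts: arrays of shape (n_samples, hidden_dim)
    holding hidden states collected at one layer for contrastive prompts.
    """
    v = harmful_acts.mean(axis=0) - benign_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Suppress the harmful direction at inference time.

    With alpha=1 this removes the component of `hidden` along `v`
    (a projection ablation); smaller alpha attenuates it instead.
    """
    return hidden - alpha * np.dot(hidden, v) * v
```

In a real VLM the same operation would be applied per layer via forward hooks, with a separate vector (and scale) per steered layer; Stage II would then treat those scales/vectors as tunable parameters under a sequence-level preference objective.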

Cite this Paper


BibTeX
@InProceedings{pmlr-v304-wu25a,
  title     = {Activation Steering Meets Preference Optimization: Defense Against Jailbreaks in Vision Language Model},
  author    = {Wu, Sihao and Jin, Gaojie and Huang, Wei and Wang, Jianhong and Huang, Xiaowei},
  booktitle = {Proceedings of the 17th Asian Conference on Machine Learning},
  pages     = {17--32},
  year      = {2025},
  editor    = {Lee, Hung-yi and Liu, Tongliang},
  volume    = {304},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v304/main/assets/wu25a/wu25a.pdf},
  url       = {https://proceedings.mlr.press/v304/wu25a.html},
  abstract  = {Vision Language Models (VLMs) have demonstrated impressive capabilities in integrating visual and textual information for understanding and reasoning, but remain highly vulnerable to adversarial attacks. While activation steering has emerged as a promising defense, existing approaches often rely on task-specific contrastive prompts to extract harmful directions, which yield suboptimal robustness and can degrade visual grounding. To address these limitations, we propose \textit{Sequence-Level Preference Optimization} for VLM (\textit{SPO-VLM}), a novel two-stage defense framework that combines activation-level intervention with policy-level optimization to enhance model robustness. In \textit{Stage I}, we compute adaptive layer-specific steering vectors from diverse data sources, enabling generalized suppression of harmful behaviors during inference. In \textit{Stage II}, we refine these steering vectors through a sequence-level preference optimization process. This stage integrates automated toxicity assessment, as well as visual-consistency rewards based on caption-image alignment, to achieve safe and semantically grounded text generation. The two-stage structure of SPO-VLM balances efficiency and effectiveness by combining a lightweight mitigation foundation in Stage I with deeper policy refinement in Stage II. Extensive experiments show that SPO-VLM enhances safety against attacks via activation steering and preference optimization, while maintaining strong performance on benign tasks without compromising visual understanding capabilities. We will release our code, model weights, and evaluation toolkit to support reproducibility and future research. \textcolor{red}{Warning: This paper may contain examples of offensive or harmful text and images.}}
}
Endnote
%0 Conference Paper
%T Activation Steering Meets Preference Optimization: Defense Against Jailbreaks in Vision Language Model
%A Sihao Wu
%A Gaojie Jin
%A Wei Huang
%A Jianhong Wang
%A Xiaowei Huang
%B Proceedings of the 17th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Hung-yi Lee
%E Tongliang Liu
%F pmlr-v304-wu25a
%I PMLR
%P 17--32
%U https://proceedings.mlr.press/v304/wu25a.html
%V 304
%X Vision Language Models (VLMs) have demonstrated impressive capabilities in integrating visual and textual information for understanding and reasoning, but remain highly vulnerable to adversarial attacks. While activation steering has emerged as a promising defense, existing approaches often rely on task-specific contrastive prompts to extract harmful directions, which yield suboptimal robustness and can degrade visual grounding. To address these limitations, we propose Sequence-Level Preference Optimization for VLM (SPO-VLM), a novel two-stage defense framework that combines activation-level intervention with policy-level optimization to enhance model robustness. In Stage I, we compute adaptive layer-specific steering vectors from diverse data sources, enabling generalized suppression of harmful behaviors during inference. In Stage II, we refine these steering vectors through a sequence-level preference optimization process. This stage integrates automated toxicity assessment, as well as visual-consistency rewards based on caption-image alignment, to achieve safe and semantically grounded text generation. The two-stage structure of SPO-VLM balances efficiency and effectiveness by combining a lightweight mitigation foundation in Stage I with deeper policy refinement in Stage II. Extensive experiments show that SPO-VLM enhances safety against attacks via activation steering and preference optimization, while maintaining strong performance on benign tasks without compromising visual understanding capabilities. We will release our code, model weights, and evaluation toolkit to support reproducibility and future research. Warning: This paper may contain examples of offensive or harmful text and images.
APA
Wu, S., Jin, G., Huang, W., Wang, J., & Huang, X. (2025). Activation Steering Meets Preference Optimization: Defense Against Jailbreaks in Vision Language Model. Proceedings of the 17th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 304:17-32. Available from https://proceedings.mlr.press/v304/wu25a.html.