Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models

Saketh Bachu, Erfan Shayegani, Rohit Lal, Trishna Chakraborty, Arindam Dutta, Chengyu Song, Yue Dong, Nael B. Abu-Ghazaleh, Amit Roy-Chowdhury
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:2310-2334, 2025.

Abstract

Vision-language models (VLMs) have improved significantly in their capabilities, but their complex architecture makes safety alignment challenging. In this paper, we reveal an uneven distribution of harmful information across the intermediate layers of the image encoder and show that skipping a certain set of layers and exiting early can increase the chance of the VLM generating harmful responses. We call this the “Image enCoder Early-exiT” (ICET) vulnerability. Our experiments on three VLMs (LLaVA-1.5, LLaVA-NeXT, and Llama 3.2) show that performing early exits from the image encoder significantly increases the likelihood of generating harmful outputs. To tackle this, we propose a simple yet effective modification of the clipped Proximal Policy Optimization (Clip-PPO) algorithm for performing layer-wise multi-modal RLHF on VLMs, which we term Layer-Wise PPO (L-PPO). We evaluate L-PPO on three multi-modal datasets and show that it consistently reduces the harmfulness caused by early exits.
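
To make the early-exit setup concrete, below is a minimal sketch of how intermediate-layer image features can be extracted from a CLIP-style vision tower of the kind used by LLaVA-1.5. The model name, exit-layer index, and CLS-token handling are illustrative assumptions for a typical Hugging Face pipeline, not the paper's exact protocol.

    # Minimal sketch of an image-encoder early exit (ICET-style probe).
    # Assumes the Hugging Face CLIP vision tower used by LLaVA-1.5; the
    # exit layer chosen below is illustrative, not the authors' exact setup.
    import torch
    from PIL import Image
    from transformers import CLIPImageProcessor, CLIPVisionModel

    MODEL = "openai/clip-vit-large-patch14-336"
    encoder = CLIPVisionModel.from_pretrained(MODEL)
    processor = CLIPImageProcessor.from_pretrained(MODEL)

    def early_exit_features(image: Image.Image, exit_layer: int) -> torch.Tensor:
        """Return patch features from an intermediate encoder layer,
        skipping every layer after `exit_layer`."""
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            out = encoder(**inputs, output_hidden_states=True)
        # hidden_states[0] is the patch embedding; index k is the output of
        # transformer layer k, so k < 24 exits the ViT-L encoder early.
        feats = out.hidden_states[exit_layer]
        return feats[:, 1:, :]  # drop the CLS token, as LLaVA's projector expects

    # Example: exit halfway through the 24-layer encoder.
    img = Image.new("RGB", (336, 336))
    patch_feats = early_exit_features(img, exit_layer=12)

Features like `patch_feats` would then pass through the VLM's projector into the language model in place of the usual final-layer features, presumably bypassing whatever safety behavior was aligned against the default features.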
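
For reference, Clip-PPO optimizes the standard clipped surrogate objective. One plausible layer-wise reading of L-PPO, written here as a schematic under our assumptions rather than the paper's exact formulation, additionally takes an expectation over the exit layers being aligned:

    L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
    \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}

    L^{\mathrm{L\text{-}PPO}}(\theta) = \mathbb{E}_{\ell \in \mathcal{S}}\;\mathbb{E}_t\!\left[\min\!\left(r_t^{(\ell)}(\theta)\,\hat{A}_t^{(\ell)},\ \operatorname{clip}\!\left(r_t^{(\ell)}(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t^{(\ell)}\right)\right]

where \mathcal{S} is the set of image-encoder exit layers, and r_t^{(\ell)} and \hat{A}_t^{(\ell)} are computed from rollouts in which the VLM consumes image features exited at layer \ell.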

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-bachu25a,
  title     = {Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models},
  author    = {Bachu, Saketh and Shayegani, Erfan and Lal, Rohit and Chakraborty, Trishna and Dutta, Arindam and Song, Chengyu and Dong, Yue and Abu-Ghazaleh, Nael B. and Roy-Chowdhury, Amit},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {2310--2334},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/bachu25a/bachu25a.pdf},
  url       = {https://proceedings.mlr.press/v267/bachu25a.html},
  abstract  = {Vision-language models (VLMs) have improved significantly in their capabilities, but their complex architecture makes safety alignment challenging. In this paper, we reveal an uneven distribution of harmful information across the intermediate layers of the image encoder and show that skipping a certain set of layers and exiting early can increase the chance of the VLM generating harmful responses. We call this the “Image enCoder Early-exiT” (ICET) vulnerability. Our experiments on three VLMs (LLaVA-1.5, LLaVA-NeXT, and Llama 3.2) show that performing early exits from the image encoder significantly increases the likelihood of generating harmful outputs. To tackle this, we propose a simple yet effective modification of the clipped Proximal Policy Optimization (Clip-PPO) algorithm for performing layer-wise multi-modal RLHF on VLMs, which we term Layer-Wise PPO (L-PPO). We evaluate L-PPO on three multi-modal datasets and show that it consistently reduces the harmfulness caused by early exits.}
}
Endnote
%0 Conference Paper
%T Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models
%A Saketh Bachu
%A Erfan Shayegani
%A Rohit Lal
%A Trishna Chakraborty
%A Arindam Dutta
%A Chengyu Song
%A Yue Dong
%A Nael B. Abu-Ghazaleh
%A Amit Roy-Chowdhury
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-bachu25a
%I PMLR
%P 2310--2334
%U https://proceedings.mlr.press/v267/bachu25a.html
%V 267
%X Vision-language models (VLMs) have improved significantly in their capabilities, but their complex architecture makes safety alignment challenging. In this paper, we reveal an uneven distribution of harmful information across the intermediate layers of the image encoder and show that skipping a certain set of layers and exiting early can increase the chance of the VLM generating harmful responses. We call this the “Image enCoder Early-exiT” (ICET) vulnerability. Our experiments on three VLMs (LLaVA-1.5, LLaVA-NeXT, and Llama 3.2) show that performing early exits from the image encoder significantly increases the likelihood of generating harmful outputs. To tackle this, we propose a simple yet effective modification of the clipped Proximal Policy Optimization (Clip-PPO) algorithm for performing layer-wise multi-modal RLHF on VLMs, which we term Layer-Wise PPO (L-PPO). We evaluate L-PPO on three multi-modal datasets and show that it consistently reduces the harmfulness caused by early exits.
APA
Bachu, S., Shayegani, E., Lal, R., Chakraborty, T., Dutta, A., Song, C., Dong, Y., Abu-Ghazaleh, N. B., & Roy-Chowdhury, A. (2025). Layer-wise Alignment: Examining Safety Alignment Across Image Encoder Layers in Vision Language Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:2310-2334. Available from https://proceedings.mlr.press/v267/bachu25a.html.
