Image Hijacks: Adversarial Images can Control Generative Models at Runtime

Luke Bailey, Euan Ong, Stuart Russell, Scott Emmons
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:2443-2455, 2024.

Abstract

Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. ‘the Eiffel Tower is now located in Rome’) using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks for four types of attack: forcing VLMs to generate outputs of the adversary’s choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.
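At a high level, Behaviour Matching optimises the pixels of an image so that a frozen VLM, conditioned on that image and a context prompt, assigns high likelihood to a target output. The sketch below shows one common way such an optimisation can be instantiated, as projected gradient descent under an L-infinity budget; it is an illustrative sketch, not the authors' released code. Here vlm_logits is an assumed wrapper that runs a frozen VLM (e.g. a LLaVA checkpoint) with teacher forcing and returns logits over the target tokens, dataset is a list of (prompt_ids, target_ids) pairs defining the target behaviour, and the epsilon budget and step size are illustrative values.

# Minimal PGD-style sketch of the Behaviour Matching idea (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def train_image_hijack(vlm_logits, init_image, dataset,
                       epsilon=8 / 255, step_size=1 / 255, num_steps=1000):
    """Optimise an L-inf-bounded image perturbation so the frozen VLM emits the target tokens."""
    base = init_image.clone().detach()                  # clean image, pixel values in [0, 1]
    delta = torch.zeros_like(base, requires_grad=True)  # adversarial perturbation

    for step in range(num_steps):
        prompt_ids, target_ids = dataset[step % len(dataset)]
        image = (base + delta).clamp(0.0, 1.0)

        # Teacher-forced logits over the target behaviour's tokens (hypothetical wrapper).
        logits = vlm_logits(image, prompt_ids, target_ids)
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               target_ids.reshape(-1))
        loss.backward()

        with torch.no_grad():
            delta -= step_size * delta.grad.sign()       # signed-gradient descent step
            delta.clamp_(-epsilon, epsilon)              # project back into the L-inf ball
        delta.grad.zero_()

    return (base + delta).clamp(0.0, 1.0).detach()

Prompt Matching, as described in the abstract, can be read as reusing this same loop with targets derived from the model's own behaviour when the chosen text prompt is placed in its context, computed over a generic dataset unrelated to that prompt.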

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-bailey24a,
  title     = {Image Hijacks: Adversarial Images can Control Generative Models at Runtime},
  author    = {Bailey, Luke and Ong, Euan and Russell, Stuart and Emmons, Scott},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {2443--2455},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/bailey24a/bailey24a.pdf},
  url       = {https://proceedings.mlr.press/v235/bailey24a.html},
  abstract  = {Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. ‘the Eiffel Tower is now located in Rome’) using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks for four types of attack: forcing VLMs to generate outputs of the adversary’s choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.}
}
Endnote
%0 Conference Paper
%T Image Hijacks: Adversarial Images can Control Generative Models at Runtime
%A Luke Bailey
%A Euan Ong
%A Stuart Russell
%A Scott Emmons
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-bailey24a
%I PMLR
%P 2443--2455
%U https://proceedings.mlr.press/v235/bailey24a.html
%V 235
%X Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. ‘the Eiffel Tower is now located in Rome’) using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour Matching to craft hijacks for four types of attack: forcing VLMs to generate outputs of the adversary’s choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.
APA
Bailey, L., Ong, E., Russell, S. & Emmons, S. (2024). Image Hijacks: Adversarial Images can Control Generative Models at Runtime. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:2443-2455. Available from https://proceedings.mlr.press/v235/bailey24a.html.
