Mind the Gap: A Practical Attack on GGUF Quantization

Kazuki Egashira, Robin Staab, Mark Vero, Jingxuan He, Martin Vechev
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:15038-15065, 2025.

Abstract

With the increasing size of frontier LLMs, post-training quantization has become the standard for memory-efficient deployment. Recent work has shown that basic rounding-based quantization schemes pose security risks, as they can be exploited to inject malicious behaviors into quantized models that remain hidden in full precision. However, existing attacks cannot be applied to more complex quantization methods, such as the GGUF family used in the popular ollama and llama.cpp frameworks. In this work, we address this gap by introducing the first attack on GGUF. Our key insight is that the quantization error – the difference between the full-precision weights and their (de-)quantized version – provides sufficient flexibility to construct malicious quantized models that appear benign in full precision. Leveraging this, we develop an attack that trains the target malicious LLM while constraining its weights based on quantization errors. We demonstrate the effectiveness of our attack on three popular LLMs across nine GGUF quantization data types and three diverse attack scenarios: insecure code generation ($\Delta = 88.7\%$), targeted content injection ($\Delta = 85.0\%$), and benign instruction refusal ($\Delta = 30.1\%$). Our attack highlights that (1) the most widely used post-training quantization method is susceptible to adversarial interference, and (2) the complexity of quantization schemes alone is insufficient as a defense.
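
To make the abstract's core constraint concrete, the following is a minimal sketch, not the paper's implementation: assuming a PyTorch-style fine-tuning loop, each full-precision weight is kept inside a box whose half-width is its own quantization error, the intuition being that weights can then be adjusted while (by assumption) leaving the quantized artifact largely unchanged. The helpers quantize and dequantize stand in for a GGUF (de-)quantization routine, and margin is a hypothetical slack factor; none of these names come from the paper.

# Illustrative sketch only (not the paper's code): a per-weight box constraint
# derived from the quantization error, enforced by projection after each step.
import torch


def error_box(w_fp: torch.Tensor, quantize, dequantize, margin: float = 1.0):
    """Return per-weight lower/upper bounds centered on the dequantized value."""
    w_dq = dequantize(quantize(w_fp))   # reference weights the quantized model sees
    err = (w_fp - w_dq).abs()           # quantization error magnitude per weight
    return w_dq - margin * err, w_dq + margin * err


def project_into_box(w: torch.Tensor, lo: torch.Tensor, hi: torch.Tensor) -> torch.Tensor:
    """Clamp each weight back into its interval after an optimizer step."""
    return torch.clamp(w, min=lo, max=hi)


# Schematic use inside a training loop (quantize/dequantize assumed given):
#   lo, hi = error_box(param.detach(), quantize, dequantize)
#   loss.backward(); optimizer.step()
#   with torch.no_grad():
#       param.copy_(project_into_box(param, lo, hi))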

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-egashira25a,
  title     = {Mind the Gap: A Practical Attack on {GGUF} Quantization},
  author    = {Egashira, Kazuki and Staab, Robin and Vero, Mark and He, Jingxuan and Vechev, Martin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {15038--15065},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/egashira25a/egashira25a.pdf},
  url       = {https://proceedings.mlr.press/v267/egashira25a.html},
  abstract  = {With the increasing size of frontier LLMs, post-training quantization has become the standard for memory-efficient deployment. Recent work has shown that basic rounding-based quantization schemes pose security risks, as they can be exploited to inject malicious behaviors into quantized models that remain hidden in full precision. However, existing attacks cannot be applied to more complex quantization methods, such as the GGUF family used in the popular ollama and llama.cpp frameworks. In this work, we address this gap by introducing the first attack on GGUF. Our key insight is that the quantization error – the difference between the full-precision weights and their (de-)quantized version – provides sufficient flexibility to construct malicious quantized models that appear benign in full precision. Leveraging this, we develop an attack that trains the target malicious LLM while constraining its weights based on quantization errors. We demonstrate the effectiveness of our attack on three popular LLMs across nine GGUF quantization data types and three diverse attack scenarios: insecure code generation ($\Delta = 88.7\%$), targeted content injection ($\Delta = 85.0\%$), and benign instruction refusal ($\Delta = 30.1\%$). Our attack highlights that (1) the most widely used post-training quantization method is susceptible to adversarial interference, and (2) the complexity of quantization schemes alone is insufficient as a defense.}
}
Endnote
%0 Conference Paper
%T Mind the Gap: A Practical Attack on GGUF Quantization
%A Kazuki Egashira
%A Robin Staab
%A Mark Vero
%A Jingxuan He
%A Martin Vechev
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-egashira25a
%I PMLR
%P 15038--15065
%U https://proceedings.mlr.press/v267/egashira25a.html
%V 267
%X With the increasing size of frontier LLMs, post-training quantization has become the standard for memory-efficient deployment. Recent work has shown that basic rounding-based quantization schemes pose security risks, as they can be exploited to inject malicious behaviors into quantized models that remain hidden in full precision. However, existing attacks cannot be applied to more complex quantization methods, such as the GGUF family used in the popular ollama and llama.cpp frameworks. In this work, we address this gap by introducing the first attack on GGUF. Our key insight is that the quantization error – the difference between the full-precision weights and their (de-)quantized version – provides sufficient flexibility to construct malicious quantized models that appear benign in full precision. Leveraging this, we develop an attack that trains the target malicious LLM while constraining its weights based on quantization errors. We demonstrate the effectiveness of our attack on three popular LLMs across nine GGUF quantization data types and three diverse attack scenarios: insecure code generation ($\Delta = 88.7\%$), targeted content injection ($\Delta = 85.0\%$), and benign instruction refusal ($\Delta = 30.1\%$). Our attack highlights that (1) the most widely used post-training quantization method is susceptible to adversarial interference, and (2) the complexity of quantization schemes alone is insufficient as a defense.
APA
Egashira, K., Staab, R., Vero, M., He, J. & Vechev, M. (2025). Mind the Gap: A Practical Attack on GGUF Quantization. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:15038-15065. Available from https://proceedings.mlr.press/v267/egashira25a.html.

Related Material