CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization

Nay Myat Min, Long H. Pham, Yige Li, Jun Sun
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:44272-44291, 2025.

Abstract

Large Language Models (LLMs) are vulnerable to backdoor attacks that manipulate outputs via hidden triggers. Existing defense methods—designed for vision/text classification tasks—fail for text generation. We propose Internal Consistency Regularization (CROW), a defense leveraging the observation that backdoored models exhibit unstable layer-wise hidden representations when triggered, while clean models show smooth transitions. CROW enforces consistency across layers via adversarial perturbations and regularization during finetuning, neutralizing backdoors without requiring clean reference models or trigger knowledge—only a small clean dataset. Experiments across Llama-2 (7B, 13B), CodeLlama (7B, 13B), and Mistral-7B demonstrate CROW’s effectiveness: it achieves significant reductions in attack success rates across diverse backdoor strategies (sentiment steering, targeted refusal, code injection) while preserving generative performance. CROW’s architecture-agnostic design enables practical deployment.
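As an illustration of the internal-consistency idea described above, below is a minimal PyTorch-style sketch of a layer-wise consistency regularizer added to a standard language-modeling loss. It is not the paper's exact objective (CROW additionally uses adversarial perturbations during finetuning); the names consistency_loss and lambda_reg are placeholders, and the cosine-distance penalty between consecutive layers is one plausible instantiation of "smooth layer-wise transitions".

# Minimal sketch of a layer-wise consistency regularizer (illustrative only;
# the exact CROW loss and its adversarial-perturbation step follow the paper).
import torch
import torch.nn.functional as F

def consistency_loss(hidden_states):
    """Penalize abrupt changes between consecutive layers' hidden states.

    hidden_states: sequence of [batch, seq_len, dim] tensors, one per layer
    (e.g., from a Transformer called with output_hidden_states=True).
    """
    loss = 0.0
    for h_prev, h_next in zip(hidden_states[:-1], hidden_states[1:]):
        # 1 - cosine similarity, averaged over batch and sequence positions
        cos = F.cosine_similarity(h_prev, h_next, dim=-1)
        loss = loss + (1.0 - cos).mean()
    return loss / (len(hidden_states) - 1)

# Toy usage with random tensors standing in for a model's hidden states.
if __name__ == "__main__":
    torch.manual_seed(0)
    fake_hidden = [torch.randn(2, 8, 16) for _ in range(5)]  # 5 "layers"
    lambda_reg = 0.1             # placeholder weight for the regularizer
    lm_loss = torch.tensor(2.3)  # stand-in for the language-modeling loss
    total = lm_loss + lambda_reg * consistency_loss(fake_hidden)
    print(float(total))

During finetuning on a small clean dataset, a total loss of this form (language-modeling loss plus a weighted consistency term) would be minimized, which is the general shape of the regularized objective the abstract describes.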

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-min25b,
  title     = {{CROW}: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization},
  author    = {Min, Nay Myat and Pham, Long H. and Li, Yige and Sun, Jun},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {44272--44291},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/min25b/min25b.pdf},
  url       = {https://proceedings.mlr.press/v267/min25b.html}
}
Endnote
%0 Conference Paper
%T CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization
%A Nay Myat Min
%A Long H. Pham
%A Yige Li
%A Jun Sun
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-min25b
%I PMLR
%P 44272--44291
%U https://proceedings.mlr.press/v267/min25b.html
%V 267
APA
Min, N.M., Pham, L.H., Li, Y. & Sun, J. (2025). CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:44272-44291. Available from https://proceedings.mlr.press/v267/min25b.html.