ProSec: Fortifying Code LLMs with Proactive Security Alignment

Xiangzhe Xu, Zian Su, Jinyao Guo, Kaiyuan Zhang, Zhenting Wang, Xiangyu Zhang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:69689-69704, 2025.

Abstract

While recent code-specific large language models (LLMs) have greatly enhanced code generation capabilities, the safety of these models remains under-explored, posing potential risks as insecure code generated by these models may introduce vulnerabilities into real-world systems. Existing methods collect security-focused datasets from real-world vulnerabilities for instruction tuning in order to mitigate such issues. However, they are largely constrained by the data sparsity of vulnerable code and have limited applicability in the multi-stage post-training workflows of modern LLMs. In this paper, we propose ProSec, a novel proactive security alignment approach designed to align code LLMs with secure coding practices. ProSec systematically exposes the vulnerabilities in a code LLM by synthesizing vulnerability-inducing coding scenarios from Common Weakness Enumerations (CWEs), and generates fixes to vulnerable code snippets, allowing the model to learn secure practices through preference learning objectives. The scenarios synthesized by ProSec trigger 25$\times$ more vulnerable code than a normal instruction-tuning dataset, resulting in a security-focused alignment dataset 7$\times$ larger than that of previous work. Experiments show that models trained with ProSec are 25.2% to 35.4% more secure than those trained with previous work, without degrading the models' utility.
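To make the described workflow concrete, below is a minimal, hypothetical sketch of the proactive alignment loop the abstract outlines: synthesize CWE-guided coding scenarios, sample completions from the code LLM under alignment, keep the scenarios whose completions are flagged as vulnerable, generate fixed versions, and pair fixed and vulnerable completions as preference data for a preference-learning objective such as DPO. All function names and stubs (synthesize_scenarios, is_vulnerable, generate_fix, etc.) are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the proactive security alignment loop; stubs stand in
# for the helper LLM and the vulnerability detector used in practice.
from dataclasses import dataclass
from typing import List

@dataclass
class PreferencePair:
    prompt: str      # vulnerability-inducing coding scenario
    chosen: str      # fixed (secure) completion
    rejected: str    # original vulnerable completion

def synthesize_scenarios(cwe_id: str, n: int = 5) -> List[str]:
    """Stub: in practice, a helper LLM proposes coding tasks likely to elicit this CWE."""
    return [f"[{cwe_id}] scenario {i}: write code that handles untrusted input" for i in range(n)]

def generate_code(model, prompt: str) -> str:
    """Stub: sample a completion from the code LLM being aligned (model unused here)."""
    return f"code for: {prompt}"

def is_vulnerable(code: str) -> bool:
    """Stub: in practice, a static analyzer would flag CWE instances in the completion."""
    return "untrusted input" in code

def generate_fix(code: str) -> str:
    """Stub: in practice, a helper LLM patches the flagged snippet."""
    return code + "  # patched: validate and sanitize input"

def build_alignment_data(model, cwe_ids: List[str]) -> List[PreferencePair]:
    pairs = []
    for cwe in cwe_ids:
        for prompt in synthesize_scenarios(cwe):
            completion = generate_code(model, prompt)
            if is_vulnerable(completion):  # keep only scenarios that expose the model
                fixed = generate_fix(completion)
                pairs.append(PreferencePair(prompt, chosen=fixed, rejected=completion))
    return pairs

if __name__ == "__main__":
    data = build_alignment_data(model=None, cwe_ids=["CWE-79", "CWE-89"])
    print(f"{len(data)} preference pairs; train with a preference objective (e.g., DPO).")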

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-xu25aa,
  title     = {{P}ro{S}ec: Fortifying Code {LLM}s with Proactive Security Alignment},
  author    = {Xu, Xiangzhe and Su, Zian and Guo, Jinyao and Zhang, Kaiyuan and Wang, Zhenting and Zhang, Xiangyu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {69689--69704},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/xu25aa/xu25aa.pdf},
  url       = {https://proceedings.mlr.press/v267/xu25aa.html}
}
Endnote
%0 Conference Paper
%T ProSec: Fortifying Code LLMs with Proactive Security Alignment
%A Xiangzhe Xu
%A Zian Su
%A Jinyao Guo
%A Kaiyuan Zhang
%A Zhenting Wang
%A Xiangyu Zhang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-xu25aa
%I PMLR
%P 69689--69704
%U https://proceedings.mlr.press/v267/xu25aa.html
%V 267
APA
Xu, X., Su, Z., Guo, J., Zhang, K., Wang, Z. & Zhang, X. (2025). ProSec: Fortifying Code LLMs with Proactive Security Alignment. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:69689-69704. Available from https://proceedings.mlr.press/v267/xu25aa.html.