EditLord: Learning Code Transformation Rules for Code Editing

Weichen Li, Albert Jan, Baishakhi Ray, Junfeng Yang, Chengzhi Mao, Kexin Pei
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:34876-34903, 2025.

Abstract

Code editing is a foundational task in software development, whose effectiveness depends on introducing desired code property changes without altering the original code’s intended functionality. Existing approaches often formulate code editing as an implicit end-to-end task, ignoring the fact that code-editing procedures inherently consist of discrete, explicit steps; as a result, they suffer from suboptimal performance and poor robustness and generalization. We introduce EditLord, a code editing framework that makes the code transformation steps explicit. Our key insight is to employ a language model (LM) as an inductive learner that extracts code editing rules from training code pairs and condenses them into concise meta-rule sets. These rule sets are then manifested for each training sample, either to augment it for fine-tuning or to assist prompting-based and iterative code editing. EditLord outperforms the state of the art by an average of 22.7% in editing performance and 58.1% in robustness, while achieving 20.2% higher functional correctness, across critical software engineering and security applications, LMs, and editing modes.
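The sketch below is a rough, hypothetical illustration of the two-stage pipeline the abstract describes: first an LM inductively abstracts (source, edited) training pairs into a condensed rule set, then the rule set conditions the prompt for a new edit so the transformation steps are explicit. The function names (induce_rules, edit_with_rules), the prompt templates, and the lm callable are all illustrative assumptions, not the paper's actual implementation.

from typing import Callable, List, Tuple

def induce_rules(lm: Callable[[str], str],
                 pairs: List[Tuple[str, str]]) -> List[str]:
    """Abstract each (source, edited) training pair into a concise,
    reusable transformation rule, then deduplicate into a meta-rule set."""
    rules = []
    for src, dst in pairs:
        prompt = (
            "State, as one general rule, the transformation that turns\n"
            f"--- BEFORE ---\n{src}\n--- AFTER ---\n{dst}\nRule:"
        )
        rules.append(lm(prompt).strip())
    return sorted(set(rules))  # crude stand-in for rule-set condensation

def edit_with_rules(lm: Callable[[str], str],
                    rules: List[str], code: str) -> str:
    """Edit `code` by conditioning the LM on the induced rule set,
    making the transformation steps explicit in the prompt."""
    rule_block = "\n".join(f"- {r}" for r in rules)
    prompt = (
        f"Apply the relevant rules below, step by step:\n{rule_block}\n"
        f"--- CODE ---\n{code}\n--- EDITED CODE ---\n"
    )
    return lm(prompt)

if __name__ == "__main__":
    # Stub LM so the sketch runs end to end without any API.
    stub_lm = lambda p: "Replace unbounded strcpy with a length-bounded copy."
    rules = induce_rules(stub_lm, [("strcpy(d, s);", "strncpy(d, s, n);")])
    print(edit_with_rules(stub_lm, rules, "strcpy(buf, user_input);"))

In this reading, the same induced rule set serves both editing modes mentioned in the abstract: it can be attached to training samples to build fine-tuning data, or injected into prompts as above for prompting-based and iterative editing.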

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-li25al,
  title     = {{E}dit{L}ord: Learning Code Transformation Rules for Code Editing},
  author    = {Li, Weichen and Jan, Albert and Ray, Baishakhi and Yang, Junfeng and Mao, Chengzhi and Pei, Kexin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {34876--34903},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/li25al/li25al.pdf},
  url       = {https://proceedings.mlr.press/v267/li25al.html},
  abstract  = {Code editing is a foundational task in software development, whose effectiveness depends on introducing desired code property changes without altering the original code's intended functionality. Existing approaches often formulate code editing as an implicit end-to-end task, ignoring the fact that code-editing procedures inherently consist of discrete, explicit steps; as a result, they suffer from suboptimal performance and poor robustness and generalization. We introduce EditLord, a code editing framework that makes the code transformation steps explicit. Our key insight is to employ a language model (LM) as an inductive learner that extracts code editing rules from training code pairs and condenses them into concise meta-rule sets. These rule sets are then manifested for each training sample, either to augment it for fine-tuning or to assist prompting-based and iterative code editing. EditLord outperforms the state of the art by an average of 22.7% in editing performance and 58.1% in robustness, while achieving 20.2% higher functional correctness, across critical software engineering and security applications, LMs, and editing modes.}
}
Endnote
%0 Conference Paper
%T EditLord: Learning Code Transformation Rules for Code Editing
%A Weichen Li
%A Albert Jan
%A Baishakhi Ray
%A Junfeng Yang
%A Chengzhi Mao
%A Kexin Pei
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-li25al
%I PMLR
%P 34876--34903
%U https://proceedings.mlr.press/v267/li25al.html
%V 267
%X Code editing is a foundational task in software development, whose effectiveness depends on introducing desired code property changes without altering the original code’s intended functionality. Existing approaches often formulate code editing as an implicit end-to-end task, ignoring the fact that code-editing procedures inherently consist of discrete, explicit steps; as a result, they suffer from suboptimal performance and poor robustness and generalization. We introduce EditLord, a code editing framework that makes the code transformation steps explicit. Our key insight is to employ a language model (LM) as an inductive learner that extracts code editing rules from training code pairs and condenses them into concise meta-rule sets. These rule sets are then manifested for each training sample, either to augment it for fine-tuning or to assist prompting-based and iterative code editing. EditLord outperforms the state of the art by an average of 22.7% in editing performance and 58.1% in robustness, while achieving 20.2% higher functional correctness, across critical software engineering and security applications, LMs, and editing modes.
APA
Li, W., Jan, A., Ray, B., Yang, J., Mao, C. & Pei, K. (2025). EditLord: Learning Code Transformation Rules for Code Editing. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:34876-34903. Available from https://proceedings.mlr.press/v267/li25al.html.
