Multi-Turn Code Generation Through Single-Step Rewards

Arnav Kumar Jain, Gonzalo Gonzalez-Pumariega, Wayne Chen, Alexander M Rush, Wenting Zhao, Sanjiban Choudhury
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:26700-26716, 2025.

Abstract

We address the problem of code generation from multi-turn execution feedback. Existing methods either generate code without feedback or use complex, hierarchical reinforcement learning to optimize multi-turn rewards. We propose a simple yet scalable approach, $\mu$CODE, that solves multi-turn code generation using only single-step rewards. Our key insight is that code generation is a one-step recoverable MDP, where the correct code can be recovered from any intermediate code state in a single turn. $\mu$CODE iteratively trains both a generator to provide code solutions conditioned on multi-turn execution feedback and a verifier to score the newly generated code. Experimental evaluations show that our approach achieves significant improvements over state-of-the-art baselines. We provide an analysis of the design choices for the reward models and policy, and show the efficacy of $\mu$CODE at utilizing execution feedback.
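
The loop the abstract describes (generate a candidate, score it with the verifier, execute it, and condition the next turn on the execution feedback) can be made concrete with a short sketch. The following is a minimal illustration under stated assumptions, not the authors' implementation: generate, run_tests, and verifier_score are hypothetical stand-ins for the learned generator, the test harness, and the learned verifier, and per-turn best-of-n selection by verifier score is an assumed inference strategy.

import random
from dataclasses import dataclass

@dataclass
class Attempt:
    code: str      # candidate program for this turn
    feedback: str  # execution feedback from the test harness
    passed: bool   # whether all tests passed

def generate(problem: str, history: list) -> str:
    # Hypothetical generator: proposes code conditioned on the problem
    # statement and all prior attempts with their execution feedback.
    return f"# attempt {len(history) + 1} for: {problem}"

def run_tests(code: str) -> tuple:
    # Hypothetical test harness: would execute the code against unit
    # tests; a coin flip stands in for real execution here.
    passed = random.random() < 0.3
    return passed, "all tests passed" if passed else "AssertionError on test 2"

def verifier_score(problem: str, code: str) -> float:
    # Hypothetical learned verifier: scores a candidate without running
    # it; the paper trains this iteratively alongside the generator.
    return random.random()

def solve(problem: str, max_turns: int = 4, n_candidates: int = 5) -> list:
    # Multi-turn loop: each turn samples candidates, keeps the one the
    # verifier scores highest, executes it, and appends the resulting
    # execution feedback to the history used by the next turn.
    history = []
    for _ in range(max_turns):
        candidates = [generate(problem, history) for _ in range(n_candidates)]
        best = max(candidates, key=lambda c: verifier_score(problem, c))
        passed, feedback = run_tests(best)
        history.append(Attempt(best, feedback, passed))
        if passed:
            # One-step recoverability: a correct solution is reachable
            # from any intermediate state in a single turn, so stop here.
            break
    return history

if __name__ == "__main__":
    for turn, attempt in enumerate(solve("reverse a linked list"), start=1):
        print(turn, attempt.passed, attempt.feedback)

Note how the single-step structure appears in the training signal: each turn's candidate is rewarded on its own correctness rather than on a discounted multi-turn return, which is what allows the verifier to be trained as a one-step scorer.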

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-jain25a,
  title     = {Multi-Turn Code Generation Through Single-Step Rewards},
  author    = {Jain, Arnav Kumar and Gonzalez-Pumariega, Gonzalo and Chen, Wayne and Rush, Alexander M and Zhao, Wenting and Choudhury, Sanjiban},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {26700--26716},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/jain25a/jain25a.pdf},
  url       = {https://proceedings.mlr.press/v267/jain25a.html}
}
Endnote
%0 Conference Paper
%T Multi-Turn Code Generation Through Single-Step Rewards
%A Arnav Kumar Jain
%A Gonzalo Gonzalez-Pumariega
%A Wayne Chen
%A Alexander M Rush
%A Wenting Zhao
%A Sanjiban Choudhury
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-jain25a
%I PMLR
%P 26700--26716
%U https://proceedings.mlr.press/v267/jain25a.html
%V 267
APA
Jain, A.K., Gonzalez-Pumariega, G., Chen, W., Rush, A.M., Zhao, W. & Choudhury, S. (2025). Multi-Turn Code Generation Through Single-Step Rewards. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:26700-26716. Available from https://proceedings.mlr.press/v267/jain25a.html.
