Optimizing Language Models for Inference Time Objectives using Reinforcement Learning

Yunhao Tang, Kunhao Zheng, Gabriel Synnaeve, Remi Munos
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:59066-59085, 2025.

Abstract

In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training. We show how optimizing for inference-time performance can improve overall model efficacy. We consider generic inference-time objectives with $k$ samples, with a focus on pass@$k$ and majority voting as two main applications. With language model training on reasoning datasets, we showcase the performance trade-off enabled by training with such objectives. When training on code generation tasks, we show that the approach significantly improves pass@$k$ objectives compared to the baseline method.
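
For concreteness, the two inference-time objectives named in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: pass@$k$ is computed here with the standard unbiased estimator over $n$ generations of which $c$ are correct (Chen et al., 2021), and majority voting simply returns the most frequent of the $k$ sampled answers.

import math
from collections import Counter

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator: probability that at least one of k samples,
    # drawn without replacement from n generations of which c are correct,
    # passes. If fewer than k generations are incorrect, success is certain.
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

def majority_vote(answers):
    # Majority voting: return the most frequent of the k sampled answers.
    return Counter(answers).most_common(1)[0][0]

# Example: 10 generations, 3 correct -> estimated pass@4 of ~0.83.
print(pass_at_k(n=10, c=3, k=4))               # 0.8333...
print(majority_vote(["42", "7", "42", "42"]))  # "42"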

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-tang25o,
  title     = {Optimizing Language Models for Inference Time Objectives using Reinforcement Learning},
  author    = {Tang, Yunhao and Zheng, Kunhao and Synnaeve, Gabriel and Munos, Remi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {59066--59085},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/tang25o/tang25o.pdf},
  url       = {https://proceedings.mlr.press/v267/tang25o.html},
  abstract  = {In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training. We show how optimizing for inference-time performance can improve overall model efficacy. We consider generic inference-time objectives with $k$ samples, with a focus on pass@$k$ and majority voting as two main applications. With language model training on reasoning datasets, we showcase the performance trade-off enabled by training with such objectives. When training on code generation tasks, we show that the approach significantly improves pass@$k$ objectives compared to the baseline method.}
}
Endnote
%0 Conference Paper
%T Optimizing Language Models for Inference Time Objectives using Reinforcement Learning
%A Yunhao Tang
%A Kunhao Zheng
%A Gabriel Synnaeve
%A Remi Munos
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-tang25o
%I PMLR
%P 59066--59085
%U https://proceedings.mlr.press/v267/tang25o.html
%V 267
%X In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training. We show how optimizing for inference-time performance can improve overall model efficacy. We consider generic inference-time objectives with $k$ samples, with a focus on pass@$k$ and majority voting as two main applications. With language model training on reasoning datasets, we showcase the performance trade-off enabled by training with such objectives. When training on code generation tasks, we show that the approach significantly improves pass@$k$ objectives compared to the baseline method.
APA
Tang, Y., Zheng, K., Synnaeve, G. & Munos, R. (2025). Optimizing Language Models for Inference Time Objectives using Reinforcement Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:59066-59085. Available from https://proceedings.mlr.press/v267/tang25o.html.
