Optimizing Language Models for Inference Time Objectives using Reinforcement Learning
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:59066-59085, 2025.
Abstract
In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training. We show how optimizing for inference-time performance can improve overall model efficacy. We consider generic inference-time objectives with $k$ samples, with a focus on pass@$k$ and majority voting as two main applications. Training language models on reasoning datasets, we showcase the performance trade-off enabled by training with such objectives. When training on code generation tasks, we show that the approach significantly improves pass@$k$ performance compared to the baseline method.
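The two inference-time objectives named in the abstract, pass@$k$ and majority voting, are typically evaluated over $k$ sampled completions per problem. The sketch below illustrates that evaluation, assuming the standard unbiased pass@$k$ estimator (Chen et al., 2021) and simple plurality voting over sampled answers; the paper's exact training-time formulation of these objectives may differ.

```python
import math
from collections import Counter

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k samples,
    drawn without replacement from n generations of which c are correct,
    is correct."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset contains a correct one.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

def majority_vote(answers: list[str]) -> str:
    """Majority (plurality) voting over k sampled answers: return the most frequent one."""
    return Counter(answers).most_common(1)[0][0]

# Example: 4 of n = 16 generations pass the unit tests; estimate pass@4.
print(pass_at_k(n=16, c=4, k=4))                      # ~= 0.728
# Example: majority voting over k = 5 sampled final answers.
print(majority_vote(["42", "41", "42", "42", "7"]))   # "42"
```

In this framing, pass@$k$ rewards producing at least one correct sample among $k$, whereas majority voting rewards concentrating probability mass on the correct answer, which is the trade-off the abstract alludes to.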