Discriminative Policy Optimization for Token-Level Reward Models

Hongzhan Chen, Tao Yang, Shiping Gao, Ruijun Chen, Xiaojun Quan, Hongtao Tian, Ting Yao
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:9546-9565, 2025.

Abstract

Process reward models (PRMs) provide more nuanced supervision compared to outcome reward models (ORMs) for optimizing policy models, positioning them as a promising approach to enhancing the capabilities of LLMs in complex reasoning tasks. Recent efforts have advanced PRMs from step-level to token-level granularity by integrating reward modeling into the training of generative models, with reward scores derived from token generation probabilities. However, the conflict between generative language modeling and reward modeling may introduce instability and lead to inaccurate credit assignments. To address this challenge, we revisit token-level reward assignment by decoupling reward modeling from language generation and derive a token-level reward model through the optimization of a discriminative policy, termed the Q-function Reward Model (Q-RM). We theoretically demonstrate that Q-RM explicitly learns token-level Q-functions from preference data without relying on fine-grained annotations. In our experiments, Q-RM consistently outperforms all baseline methods across various benchmarks. For example, when integrated into PPO/REINFORCE algorithms, Q-RM enhances the average Pass@1 score by 5.85/4.70 points on mathematical reasoning tasks compared to the ORM baseline, and by 4.56/5.73 points compared to the token-level PRM counterpart. Moreover, reinforcement learning with Q-RM significantly enhances training efficiency, achieving convergence 12$\times$ faster than ORM on GSM8K and 11$\times$ faster than step-level PRM on MATH. Code and data are available at https://github.com/homzer/Q-RM.
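
To make the idea in the abstract concrete, the following is a minimal, self-contained PyTorch sketch of the general recipe it describes: a discriminative token-level reward head trained from pairwise preference data (no fine-grained annotations), whose per-token scores are then used as dense rewards in a REINFORCE-style policy update. The model architecture, the sum-of-token-scores aggregation in the preference loss, and the use of raw scores as per-token rewards are illustrative assumptions for exposition only, not the authors' Q-RM formulation; see the linked repository for the actual implementation.

    # Illustrative sketch only: a token-level reward head trained on preference
    # pairs and reused as a dense reward for REINFORCE. Details are assumptions,
    # not the paper's Q-RM.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB, DIM = 1000, 64

    class TokenRewardModel(nn.Module):
        """Assigns a scalar score to every token of a response (a stand-in for a Q-value)."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            self.encoder = nn.GRU(DIM, DIM, batch_first=True)
            self.head = nn.Linear(DIM, 1)   # discriminative head, decoupled from token generation

        def forward(self, tokens):          # tokens: (batch, seq_len) int64
            h, _ = self.encoder(self.embed(tokens))
            return self.head(h).squeeze(-1) # (batch, seq_len) per-token scores

    def preference_loss(rm, chosen, rejected):
        """Bradley-Terry loss on aggregated token scores (sum aggregation is an assumption)."""
        s_chosen = rm(chosen).sum(dim=-1)
        s_rejected = rm(rejected).sum(dim=-1)
        return -F.logsigmoid(s_chosen - s_rejected).mean()

    def reinforce_loss(policy_logits, actions, token_rewards):
        """REINFORCE objective with dense per-token rewards instead of a single outcome reward."""
        logp = torch.distributions.Categorical(logits=policy_logits).log_prob(actions)
        return -(logp * token_rewards.detach()).mean()

    if __name__ == "__main__":
        rm = TokenRewardModel()
        opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

        # Toy preference pair: in practice these are tokenized chosen/rejected responses.
        chosen = torch.randint(0, VOCAB, (4, 16))
        rejected = torch.randint(0, VOCAB, (4, 16))
        loss = preference_loss(rm, chosen, rejected)
        loss.backward()
        opt.step()
        print("preference loss:", loss.item())

        # Reuse the trained head to provide per-token rewards for a policy update.
        policy_logits = torch.randn(4, 16, VOCAB, requires_grad=True)
        actions = torch.randint(0, VOCAB, (4, 16))
        with torch.no_grad():
            token_rewards = rm(actions)
        print("reinforce loss:", reinforce_loss(policy_logits, actions, token_rewards).item())

The key design point the sketch tries to convey is the decoupling: the scoring head is trained purely discriminatively on sequence-level preferences, yet it emits one score per token, which gives the downstream PPO/REINFORCE update a dense credit-assignment signal rather than a single end-of-sequence reward.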

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chen25ca,
  title     = {Discriminative Policy Optimization for Token-Level Reward Models},
  author    = {Chen, Hongzhan and Yang, Tao and Gao, Shiping and Chen, Ruijun and Quan, Xiaojun and Tian, Hongtao and Yao, Ting},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {9546--9565},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chen25ca/chen25ca.pdf},
  url       = {https://proceedings.mlr.press/v267/chen25ca.html}
}
Endnote
%0 Conference Paper
%T Discriminative Policy Optimization for Token-Level Reward Models
%A Hongzhan Chen
%A Tao Yang
%A Shiping Gao
%A Ruijun Chen
%A Xiaojun Quan
%A Hongtao Tian
%A Ting Yao
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chen25ca
%I PMLR
%P 9546--9565
%U https://proceedings.mlr.press/v267/chen25ca.html
%V 267
APA
Chen, H., Yang, T., Gao, S., Chen, R., Quan, X., Tian, H., & Yao, T. (2025). Discriminative Policy Optimization for Token-Level Reward Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:9546-9565. Available from https://proceedings.mlr.press/v267/chen25ca.html.
