GRAM: A Generative Foundation Reward Model for Reward Generalization

Chenglong Wang, Yang Gan, Yifu Huo, Yongyu Mu, Qiaozhi He, Murun Yang, Bei Li, Tong Xiao, Chunliang Zhang, Tongran Liu, Jingbo Zhu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:62916-62936, 2025.

Abstract

Reward models play an important role in aligning large language models (LLMs), but they are typically trained as discriminative models and rely only on labeled human preference data. In this paper, we explore methods that train reward models on both unlabeled and labeled data. Building on the generative nature of LLMs, we develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. We also show that, by using label smoothing, we are in fact optimizing a regularized pairwise ranking loss. This result, in turn, offers a new view of reward model training that links generative and discriminative models under the same class of training objectives. The outcome of these techniques is a foundation reward model that can be applied to a wide range of tasks with little or no further fine-tuning. Extensive experiments show that the model generalizes well across several tasks, including response ranking, reinforcement learning from human feedback, and task adaptation with fine-tuning, achieving significant improvements over several strong baselines.
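To make the label-smoothing claim concrete, here is a minimal Python sketch (not the authors' code) under the assumption of a Bradley-Terry-style pairwise objective over scalar reward margins; the values r_chosen, r_rejected, and eps are purely illustrative. Smoothing the pairwise preference target by eps is algebraically identical to adding an eps-weighted penalty on the reward margin to the standard ranking loss, which is one way to read the "regularized pairwise ranking loss" statement in the abstract.

import math

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def smoothed_pairwise_loss(r_chosen, r_rejected, eps=0.1):
    # Pairwise ranking loss with label smoothing: the preferred response
    # gets target weight (1 - eps), the dispreferred one gets eps.
    margin = r_chosen - r_rejected
    return -(1.0 - eps) * log_sigmoid(margin) - eps * log_sigmoid(-margin)

def regularized_pairwise_loss(r_chosen, r_rejected, eps=0.1):
    # Equivalent form: the standard pairwise ranking loss plus an
    # eps-weighted penalty on the reward margin, i.e. a regularizer.
    margin = r_chosen - r_rejected
    return -log_sigmoid(margin) + eps * margin

# The two forms coincide (up to floating-point error) for any inputs,
# since log(sigmoid(-x)) = log(sigmoid(x)) - x.
print(smoothed_pairwise_loss(2.3, 1.1), regularized_pairwise_loss(2.3, 1.1))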

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wang25ad,
  title     = {{GRAM}: A Generative Foundation Reward Model for Reward Generalization},
  author    = {Wang, Chenglong and Gan, Yang and Huo, Yifu and Mu, Yongyu and He, Qiaozhi and Yang, Murun and Li, Bei and Xiao, Tong and Zhang, Chunliang and Liu, Tongran and Zhu, Jingbo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {62916--62936},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wang25ad/wang25ad.pdf},
  url       = {https://proceedings.mlr.press/v267/wang25ad.html}
}
Endnote
%0 Conference Paper
%T GRAM: A Generative Foundation Reward Model for Reward Generalization
%A Chenglong Wang
%A Yang Gan
%A Yifu Huo
%A Yongyu Mu
%A Qiaozhi He
%A Murun Yang
%A Bei Li
%A Tong Xiao
%A Chunliang Zhang
%A Tongran Liu
%A Jingbo Zhu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wang25ad
%I PMLR
%P 62916--62936
%U https://proceedings.mlr.press/v267/wang25ad.html
%V 267
APA
Wang, C., Gan, Y., Huo, Y., Mu, Y., He, Q., Yang, M., Li, B., Xiao, T., Zhang, C., Liu, T. & Zhu, J. (2025). GRAM: A Generative Foundation Reward Model for Reward Generalization. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:62916-62936. Available from https://proceedings.mlr.press/v267/wang25ad.html.
