Larger or Smaller Reward Margins to Select Preferences for LLM Alignment?

Kexin Huang, Junkang Wu, Ziqian Chen, Xue Wang, Jinyang Gao, Bolin Ding, Jiancan Wu, Xiangnan He, Xiang Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:25922-25946, 2025.

Abstract

Preference learning is critical for aligning large language models (LLMs) with human values, and the quality of preference datasets plays a crucial role in this process. While existing metrics primarily assess data quality based on either explicit or implicit reward margins, their single-margin focus often leads to contradictory evaluations of the same data. To address this issue, we propose a new metric of alignment potential, $M_{AP}$, which integrates both margins to quantify the gap from the model's current implicit reward margin to the target explicit reward margin, thereby estimating the model's potential to align with the preference data. Empirical results demonstrate that training on the data selected by $M_{AP}$ consistently enhances alignment performance, surpassing existing metrics across different base models and optimization objectives. Furthermore, our method extends to self-play data generation frameworks, where we use the metric to identify high-quality data within the content LLMs generate themselves. In this data generation scenario, our method surpasses current state-of-the-art methods across various training settings and shows continuous improvement with increasing dataset size and training iterations.
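
The exact formula for $M_{AP}$ is not given on this page; the description above (the gap from the model's current implicit reward margin to the target explicit reward margin) suggests a computation along the following lines. This is only a hedged reading of the abstract: the DPO-style implicit margin, the reward-model explicit margin, the function names, and the batch layout are all assumptions made for illustration, not the paper's definition.

import torch
import torch.nn.functional as F

def response_logprob(model, input_ids, attention_mask, labels):
    # Sum of token log-probabilities of the response under `model`.
    # Assumes a Hugging Face-style causal LM whose output exposes `.logits`
    # and labels with -100 marking prompt/padding tokens to ignore.
    with torch.no_grad():
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    logprobs = F.log_softmax(logits[:, :-1, :], dim=-1)   # token t predicts token t+1
    targets = labels[:, 1:]
    mask = targets != -100
    token_lp = torch.gather(logprobs, 2, targets.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (token_lp * mask).sum(dim=-1)                   # one scalar per sequence

def implicit_reward_margin(policy, reference, batch, beta=0.1):
    # DPO-style implicit margin: beta * (policy/reference log-ratio of the
    # chosen response minus that of the rejected response).
    lp = lambda m, k: response_logprob(m, batch[f"{k}_ids"], batch[f"{k}_mask"], batch[f"{k}_labels"])
    chosen = lp(policy, "chosen") - lp(reference, "chosen")
    rejected = lp(policy, "rejected") - lp(reference, "rejected")
    return beta * (chosen - rejected)

def alignment_potential(explicit_margin, implicit_margin):
    # Hypothetical reading of M_AP: the gap between the target explicit reward
    # margin (e.g. from an external reward model scoring chosen minus rejected)
    # and the model's current implicit reward margin on the same pair.
    return explicit_margin - implicit_margin

In a data-selection loop one would score every preference pair this way and keep a chosen fraction for training; the abstract does not state whether larger or smaller scores should be retained, so that decision, like the formula itself, should be taken from the paper rather than from this sketch.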

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-huang25al,
  title     = {Larger or Smaller Reward Margins to Select Preferences for {LLM} Alignment?},
  author    = {Huang, Kexin and Wu, Junkang and Chen, Ziqian and Wang, Xue and Gao, Jinyang and Ding, Bolin and Wu, Jiancan and He, Xiangnan and Wang, Xiang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {25922--25946},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/huang25al/huang25al.pdf},
  url       = {https://proceedings.mlr.press/v267/huang25al.html}
}
Endnote
%0 Conference Paper
%T Larger or Smaller Reward Margins to Select Preferences for LLM Alignment?
%A Kexin Huang
%A Junkang Wu
%A Ziqian Chen
%A Xue Wang
%A Jinyang Gao
%A Bolin Ding
%A Jiancan Wu
%A Xiangnan He
%A Xiang Wang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-huang25al
%I PMLR
%P 25922--25946
%U https://proceedings.mlr.press/v267/huang25al.html
%V 267
APA
Huang, K., Wu, J., Chen, Z., Wang, X., Gao, J., Ding, B., Wu, J., He, X., & Wang, X. (2025). Larger or Smaller Reward Margins to Select Preferences for LLM Alignment? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:25922-25946. Available from https://proceedings.mlr.press/v267/huang25al.html.