Towards Efficient Exact Optimization of Language Model Alignment

Haozhe Ji, Cheng Lu, Yilin Niu, Pei Ke, Hongning Wang, Jun Zhu, Jie Tang, Minlie Huang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:21648-21671, 2024.

Abstract

The alignment of language models with human preferences is vital for their application in real-world tasks. The problem is formulated as optimizing the model’s policy to maximize the expected reward that reflects human preferences, with minimal deviation from the initial policy. Although reinforcement learning (RL) is considered a straightforward solution, it suffers from high variance in policy updates, which impedes efficient policy improvement. Recently, direct preference optimization (DPO) was proposed to optimize the policy directly from preference data. However, we show that DPO, although derived from the optimal solution of the problem, leads in practice to a compromised, mean-seeking approximation of that solution. In this paper, we propose efficient exact optimization (EXO) of the alignment objective. EXO is guaranteed to optimize in the same direction as RL algorithms asymptotically for arbitrary policy parametrizations, reaching the same mode-seeking solution while enabling efficient optimization that circumvents the complexities of RL. We also compare our method to DPO with both theoretical and empirical analyses, and further demonstrate its advantages over existing approaches on realistic human preference data. Code is available at https://github.com/haozheji/exact-optimization.
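
For orientation, below is a minimal sketch of the KL-regularized alignment objective the abstract refers to, written in standard notation that is not spelled out on this page (policy \pi_\theta, reference policy \pi_{\mathrm{ref}}, reward r, regularization strength \beta, partition function Z); the exact formulation used by EXO may differ in detail.

% KL-regularized alignment objective (standard form; notation assumed):
% maximize expected reward while staying close to the initial (reference) policy.
\[
\max_{\pi_\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[\, r(x, y) \,\big]
\;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\big( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)
\]
% Its closed-form optimal policy, from which DPO is derived:
\[
\pi^{*}(y \mid x) \;=\; \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)\,
\exp\!\big( r(x, y) / \beta \big)
\]
% Mode-seeking vs. mean-seeking (as contrasted in the abstract): minimizing the
% reverse KL, D_KL(pi_theta || pi*), as RL-style optimization does, concentrates
% probability on high-reward modes of pi*, whereas minimizing the forward KL,
% D_KL(pi* || pi_theta), the mean-seeking behavior the paper attributes to DPO in
% practice, spreads mass to cover the whole target distribution.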

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-ji24c,
  title     = {Towards Efficient Exact Optimization of Language Model Alignment},
  author    = {Ji, Haozhe and Lu, Cheng and Niu, Yilin and Ke, Pei and Wang, Hongning and Zhu, Jun and Tang, Jie and Huang, Minlie},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {21648--21671},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/ji24c/ji24c.pdf},
  url       = {https://proceedings.mlr.press/v235/ji24c.html}
}
Endnote
%0 Conference Paper
%T Towards Efficient Exact Optimization of Language Model Alignment
%A Haozhe Ji
%A Cheng Lu
%A Yilin Niu
%A Pei Ke
%A Hongning Wang
%A Jun Zhu
%A Jie Tang
%A Minlie Huang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-ji24c
%I PMLR
%P 21648--21671
%U https://proceedings.mlr.press/v235/ji24c.html
%V 235
APA
Ji, H., Lu, C., Niu, Y., Ke, P., Wang, H., Zhu, J., Tang, J. & Huang, M. (2024). Towards Efficient Exact Optimization of Language Model Alignment. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:21648-21671. Available from https://proceedings.mlr.press/v235/ji24c.html.
