Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF

Banghua Zhu, Michael Jordan, Jiantao Jiao
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:62405-62428, 2024.

Abstract

Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique that aligns language models closely with human-centric values. The initial phase of RLHF involves learning human values using a reward model from ranking data. It is observed that the performance of the reward model degrades after one epoch of training, and optimizing too much against the learned reward model eventually hinders the true objective. This paper analyzes potential reasons behind these issues and designs an improved reward learning algorithm termed 'Iterative Data Smoothing' (IDS). The core idea is that during each training epoch, we not only update the model with the data, but also update the data using the model, replacing hard labels with soft labels. Our empirical findings highlight the superior performance of this approach over traditional methods.
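The core idea stated in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical rendering of iterative data smoothing for a Bradley-Terry reward model trained on pairwise comparisons: each epoch fits the model to the current labels and then smooths each label toward the model's own predicted preference. The function names, the smoothing weight beta, and the exact label-update rule are illustrative assumptions, not the authors' implementation; consult the paper PDF for the actual IDS algorithm.

    # Minimal sketch of the iterative-data-smoothing idea (assumptions noted above).
    import torch

    def ids_train(reward_model, optimizer, pairs, num_epochs=5, beta=0.7):
        """pairs: list of (x_chosen, x_rejected) input tensors.
        Labels start hard (1.0 = chosen response preferred) and are smoothed
        toward the model's predicted preference after each update."""
        soft_labels = [torch.tensor(1.0) for _ in pairs]  # initial hard labels
        for epoch in range(num_epochs):
            for i, (x_chosen, x_rejected) in enumerate(pairs):
                r_c = reward_model(x_chosen)   # scalar reward for chosen response
                r_r = reward_model(x_rejected) # scalar reward for rejected response
                # Bradley-Terry probability that the chosen response is preferred
                p_chosen = torch.sigmoid(r_c - r_r)
                # Cross-entropy against the current (soft) label
                y = soft_labels[i]
                loss = -(y * torch.log(p_chosen + 1e-8)
                         + (1 - y) * torch.log(1 - p_chosen + 1e-8))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                # "Update the data using the model": move the label toward the
                # model's prediction (mixing weight beta is an assumption)
                soft_labels[i] = beta * y + (1 - beta) * p_chosen.detach()
        return reward_model

In this sketch, examples whose hard label the model persistently disagrees with end up with softened targets, which is one way to read the paper's claim that smoothing mitigates overfitting to noisy or ambiguous comparisons.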

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhu24e,
  title = {Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in {RLHF}},
  author = {Zhu, Banghua and Jordan, Michael and Jiao, Jiantao},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {62405--62428},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24e/zhu24e.pdf},
  url = {https://proceedings.mlr.press/v235/zhu24e.html},
  abstract = {Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique that aligns language models closely with human-centric values. The initial phase of RLHF involves learning human values using a reward model from ranking data. It is observed that the performance of the reward model degrades after one epoch of training, and optimizing too much against the learned reward model eventually hinders the true objective. This paper analyzes potential reasons behind these issues and designs an improved reward learning algorithm termed 'Iterative Data Smoothing' (IDS). The core idea is that during each training epoch, we not only update the model with the data, but also update the data using the model, replacing hard labels with soft labels. Our empirical findings highlight the superior performance of this approach over traditional methods.}
}
Endnote
%0 Conference Paper
%T Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
%A Banghua Zhu
%A Michael Jordan
%A Jiantao Jiao
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhu24e
%I PMLR
%P 62405--62428
%U https://proceedings.mlr.press/v235/zhu24e.html
%V 235
%X Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique that aligns language models closely with human-centric values. The initial phase of RLHF involves learning human values using a reward model from ranking data. It is observed that the performance of the reward model degrades after one epoch of training, and optimizing too much against the learned reward model eventually hinders the true objective. This paper analyzes potential reasons behind these issues and designs an improved reward learning algorithm termed 'Iterative Data Smoothing' (IDS). The core idea is that during each training epoch, we not only update the model with the data, but also update the data using the model, replacing hard labels with soft labels. Our empirical findings highlight the superior performance of this approach over traditional methods.
APA
Zhu, B., Jordan, M. & Jiao, J. (2024). Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:62405-62428. Available from https://proceedings.mlr.press/v235/zhu24e.html.