Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:25438-25473, 2025.
Abstract
Sample efficiency is critical for online Reinforcement Learning from Human Feedback (RLHF). While existing work investigates sample-efficient online exploration strategies, the potential of utilizing misspecified yet relevant reward models to accelerate learning remains underexplored. This paper studies how to transfer knowledge from such imperfect reward models in online RLHF. We start by identifying a novel property of the KL-regularized RLHF objective: a policy's coverability of the optimal policy is captured by its sub-optimality. Building on this insight, we propose novel transfer learning principles and a theoretical algorithm, Transfer Policy Optimization (TPO), with provable benefits over standard online learning. Empirically, inspired by our theoretical findings, we develop a win-rate-based transfer policy selection strategy with improved computational efficiency. Moreover, our empirical transfer learning technique is modular and can be integrated with various policy optimization methods, such as DPO, IPO, and XPO, to further enhance their performance. We validate the effectiveness of our method through experiments on summarization tasks.
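For context, the KL-regularized RLHF objective referenced in the abstract takes the standard form below; the notation (reference policy \pi_{\mathrm{ref}}, regularization coefficient \beta, reward model r) is the conventional one and is assumed here rather than taken from the paper:

    \max_{\pi}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot \mid x)}\!\left[ r(x, y) \right] \;-\; \beta\, \mathrm{KL}\!\left( \pi(\cdot \mid x) \,\middle\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right),

whose maximizer admits the well-known closed form \pi^{*}(y \mid x) \propto \pi_{\mathrm{ref}}(y \mid x) \exp\!\left( r(x, y) / \beta \right). This closed form is what lets the density ratio \pi^{*}/\pi, a coverage quantity, be expressed through reward and KL terms, the kind of regularized structure underlying the coverability property stated in the abstract.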