When Can Proxies Improve the Sample Complexity of Preference Learning?

Yuchen Zhu, Daniel Augusto De Souza, Zhengyan Shi, Mengyue Yang, Pasquale Minervini, Matt Kusner, Alexander D’Amour
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:79790-79814, 2025.

Abstract

We address the problem of reward hacking, where maximising a proxy reward does not necessarily increase the true reward. This is a key concern for Large Language Models (LLMs), as they are often fine-tuned on human preferences that may not accurately reflect a true objective. Existing work uses various tricks, such as regularisation, tweaks to the reward model, and reward-hacking detectors, to limit the influence that such proxy preferences have on a model. Luckily, in many contexts such as medicine, education, and law, a small amount of expert data is often available. In these cases, it is often unclear whether the addition of proxy data can improve policy learning. We outline a set of sufficient conditions on proxy feedback that, if satisfied, indicate that proxy data can provably improve the sample complexity of learning the ground-truth policy. These conditions can inform the data collection process for specific tasks. The result implies a parameterisation for LLMs that achieves this improved sample complexity. We detail how existing architectures can be adapted to yield this improvement.
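To make the setting concrete, the sketch below illustrates the general data regime the abstract describes: a small set of expert-labelled preference pairs pooled with a much larger set of proxy-labelled pairs when fitting a Bradley-Terry reward model. It is a minimal, generic sketch, not the paper's method: the toy linear reward head, the synthetic features, and the proxy_weight trade-off parameter are assumptions introduced here for illustration and do not reflect the paper's conditions or parameterisation.

# Minimal, generic sketch (not the paper's method): fit a Bradley-Terry reward
# model on a small expert-labelled preference set pooled with a larger
# proxy-labelled set. `proxy_weight` is a hypothetical mixing weight.
import torch

torch.manual_seed(0)

def bradley_terry_nll(reward_model, chosen, rejected):
    # Negative log-likelihood of preferring `chosen` over `rejected`
    # under a Bradley-Terry model on reward differences.
    margin = reward_model(chosen) - reward_model(rejected)
    return -torch.nn.functional.logsigmoid(margin).mean()

d = 16                                # feature dimension of a response
reward_model = torch.nn.Linear(d, 1)  # toy linear reward head

# Sparse expert preferences vs. abundant proxy preferences (synthetic data).
expert_chosen, expert_rejected = torch.randn(32, d), torch.randn(32, d)
proxy_chosen, proxy_rejected = torch.randn(2048, d), torch.randn(2048, d)

proxy_weight = 0.5                    # hypothetical trade-off parameter
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = (bradley_terry_nll(reward_model, expert_chosen, expert_rejected)
            + proxy_weight * bradley_terry_nll(reward_model, proxy_chosen, proxy_rejected))
    loss.backward()
    opt.step()

print(f"final pooled loss: {loss.item():.3f}")

Whether the proxy term actually helps (rather than inducing reward hacking) is exactly what the paper's sufficient conditions characterise; the weighted pooling above is only one naive way to combine the two sources.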

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-zhu25f,
  title     = {When Can Proxies Improve the Sample Complexity of Preference Learning?},
  author    = {Zhu, Yuchen and De Souza, Daniel Augusto and Shi, Zhengyan and Yang, Mengyue and Minervini, Pasquale and Kusner, Matt and D'Amour, Alexander},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {79790--79814},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhu25f/zhu25f.pdf},
  url       = {https://proceedings.mlr.press/v267/zhu25f.html}
}
Endnote
%0 Conference Paper
%T When Can Proxies Improve the Sample Complexity of Preference Learning?
%A Yuchen Zhu
%A Daniel Augusto De Souza
%A Zhengyan Shi
%A Mengyue Yang
%A Pasquale Minervini
%A Matt Kusner
%A Alexander D’Amour
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhu25f
%I PMLR
%P 79790--79814
%U https://proceedings.mlr.press/v267/zhu25f.html
%V 267
APA
Zhu, Y., De Souza, D.A., Shi, Z., Yang, M., Minervini, P., Kusner, M. & D’Amour, A. (2025). When Can Proxies Improve the Sample Complexity of Preference Learning? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:79790-79814. Available from https://proceedings.mlr.press/v267/zhu25f.html.