Deep Reinforcement Learning from Hierarchical Preference Design

Alexander Bukharin, Yixiao Li, Pengcheng He, Tuo Zhao
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:5875-5893, 2025.

Abstract

Reward design is a fundamental yet challenging aspect of reinforcement learning (RL). Researchers typically utilize feedback signals from the environment to handcraft a reward function, but this process is not always effective due to the varying scales and intricate dependencies of the feedback signals. This paper shows that, by exploiting certain structures, one can ease the reward design process. Specifically, we propose a hierarchical reward design framework, HERON, for two scenarios: (I) the feedback signals naturally present a hierarchy; (II) the reward is sparse, but less important surrogate feedback is available to help policy learning. Both scenarios allow us to design a hierarchical decision tree, induced by the importance ranking of the feedback signals, to compare RL trajectories. With such preference data, we can then train a reward model for policy learning. We apply HERON to several RL applications, and we find that our framework can not only train high-performing agents on a variety of difficult tasks, but also provide additional benefits such as improved sample efficiency and robustness.
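
The pipeline the abstract describes has two concrete stages: compare pairs of trajectories by walking down the importance-ranked feedback signals, and fit a reward model on the resulting preference labels. The sketch below (Python/PyTorch) illustrates both stages under stated assumptions: the margin-based tie-breaking rule, the dictionary trajectory encoding, and the small MLP reward model with a Bradley-Terry loss are all illustrative choices, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def hierarchical_preference(traj_a, traj_b, signal_order, margins):
    """Compare two trajectories via a hierarchical decision tree.

    traj_a, traj_b: dicts mapping feedback-signal names to scalar scores
                    (higher is assumed better for every signal).
    signal_order:   signal names ranked from most to least important.
    margins:        per-signal thresholds (assumed, for illustration);
                    differences within the margin are treated as ties
                    and deferred to the next, less important signal.
    Returns 1.0 if traj_a is preferred, 0.0 if traj_b, None if tied throughout.
    """
    for name in signal_order:
        diff = traj_a[name] - traj_b[name]
        if abs(diff) > margins[name]:   # decisive at this level of the tree
            return 1.0 if diff > 0 else 0.0
    return None  # indistinguishable on every feedback signal

class RewardModel(nn.Module):
    """Scores a trajectory feature vector; trained from preference labels."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def preference_loss(model, feats_a, feats_b, labels):
    """Bradley-Terry loss: P(a preferred over b) = sigmoid(r(a) - r(b))."""
    logits = model(feats_a) - model(feats_b)
    return F.binary_cross_entropy_with_logits(logits, labels)

Pairs left tied on every signal (returning None) would simply be dropped from the preference dataset; the learned reward model then stands in for a hand-crafted reward during standard policy optimization.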

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-bukharin25a,
  title     = {Deep Reinforcement Learning from Hierarchical Preference Design},
  author    = {Bukharin, Alexander and Li, Yixiao and He, Pengcheng and Zhao, Tuo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {5875--5893},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/bukharin25a/bukharin25a.pdf},
  url       = {https://proceedings.mlr.press/v267/bukharin25a.html},
  abstract  = {Reward design is a fundamental yet challenging aspect of reinforcement learning (RL). Researchers typically utilize feedback signals from the environment to handcraft a reward function, but this process is not always effective due to the varying scales and intricate dependencies of the feedback signals. This paper shows that, by exploiting certain structures, one can ease the reward design process. Specifically, we propose a hierarchical reward design framework, HERON, for two scenarios: (I) the feedback signals naturally present a hierarchy; (II) the reward is sparse, but less important surrogate feedback is available to help policy learning. Both scenarios allow us to design a hierarchical decision tree, induced by the importance ranking of the feedback signals, to compare RL trajectories. With such preference data, we can then train a reward model for policy learning. We apply HERON to several RL applications, and we find that our framework can not only train high-performing agents on a variety of difficult tasks, but also provide additional benefits such as improved sample efficiency and robustness.}
}
Endnote
%0 Conference Paper
%T Deep Reinforcement Learning from Hierarchical Preference Design
%A Alexander Bukharin
%A Yixiao Li
%A Pengcheng He
%A Tuo Zhao
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-bukharin25a
%I PMLR
%P 5875--5893
%U https://proceedings.mlr.press/v267/bukharin25a.html
%V 267
%X Reward design is a fundamental yet challenging aspect of reinforcement learning (RL). Researchers typically utilize feedback signals from the environment to handcraft a reward function, but this process is not always effective due to the varying scales and intricate dependencies of the feedback signals. This paper shows that, by exploiting certain structures, one can ease the reward design process. Specifically, we propose a hierarchical reward design framework, HERON, for two scenarios: (I) the feedback signals naturally present a hierarchy; (II) the reward is sparse, but less important surrogate feedback is available to help policy learning. Both scenarios allow us to design a hierarchical decision tree, induced by the importance ranking of the feedback signals, to compare RL trajectories. With such preference data, we can then train a reward model for policy learning. We apply HERON to several RL applications, and we find that our framework can not only train high-performing agents on a variety of difficult tasks, but also provide additional benefits such as improved sample efficiency and robustness.
APA
Bukharin, A., Li, Y., He, P. & Zhao, T. (2025). Deep Reinforcement Learning from Hierarchical Preference Design. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:5875-5893. Available from https://proceedings.mlr.press/v267/bukharin25a.html.
