Action-Dependent Optimality-Preserving Reward Shaping

Grant Collier Forbes, Jianxun Wang, Leonardo Villalobos-Arias, Arnav Jhala, David Roberts
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:17437-17451, 2025.

Abstract

Recent RL research has utilized reward shaping, particularly complex shaping rewards such as intrinsic motivation (IM), to encourage agent exploration in sparse-reward environments. While often effective, “reward hacking” can lead to the shaping reward being optimized at the expense of the extrinsic reward, resulting in a suboptimal policy. Potential-Based Reward Shaping (PBRS) techniques such as Generalized Reward Matching (GRM) and Policy-Invariant Explicit Shaping (PIES) have mitigated this by allowing IM to be implemented without altering the set of optimal policies. In this work we show that these methods are effectively unsuitable for complex, exploration-heavy environments with long-duration episodes. To remedy this, we introduce Action-Dependent Optimality-Preserving Shaping (ADOPS), a method of converting intrinsic rewards to an optimality-preserving form that allows agents to utilize IM more effectively in the extremely sparse environment of Montezuma’s Revenge. We also prove that ADOPS accommodates reward-shaping functions that cannot be written in a potential-based form: while PBRS-based methods require the cumulative discounted intrinsic return to be independent of actions, ADOPS allows the cumulative intrinsic return to depend on agents’ actions while still preserving the optimal policy set. We show how this action-dependence enables ADOPS to preserve optimality while learning in complex, sparse-reward environments where other methods struggle.
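For context, a brief background sketch in standard PBRS notation (this is textbook material in the sense of Ng et al., 1999, not taken from the paper): reward shaping adds a shaping term F to the environment's extrinsic reward, and the potential-based restriction requires F to be a discounted difference of a state potential Φ, which is the classical condition under which the optimal policy set is provably unchanged:

\[
  r_{\text{shaped}}(s, a, s') \;=\; r_{\text{ext}}(s, a, s') + F(s, a, s'),
  \qquad
  F(s, a, s') \;=\; \gamma\,\Phi(s') - \Phi(s).
\]

Because the discounted sum of F telescopes along any trajectory (it depends only on the starting state through Φ), such shaping cannot change which policies are optimal. The abstract's claim is that ADOPS relaxes this state-only restriction, allowing the cumulative intrinsic return to depend on actions while still preserving the optimal policy set.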

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-forbes25a,
  title     = {Action-Dependent Optimality-Preserving Reward Shaping},
  author    = {Forbes, Grant Collier and Wang, Jianxun and Villalobos-Arias, Leonardo and Jhala, Arnav and Roberts, David},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {17437--17451},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/forbes25a/forbes25a.pdf},
  url       = {https://proceedings.mlr.press/v267/forbes25a.html}
}
Endnote
%0 Conference Paper
%T Action-Dependent Optimality-Preserving Reward Shaping
%A Grant Collier Forbes
%A Jianxun Wang
%A Leonardo Villalobos-Arias
%A Arnav Jhala
%A David Roberts
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-forbes25a
%I PMLR
%P 17437--17451
%U https://proceedings.mlr.press/v267/forbes25a.html
%V 267
APA
Forbes, G.C., Wang, J., Villalobos-Arias, L., Jhala, A. & Roberts, D. (2025). Action-Dependent Optimality-Preserving Reward Shaping. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:17437-17451. Available from https://proceedings.mlr.press/v267/forbes25a.html.
