Calibrated Value-Aware Model Learning with Probabilistic Environment Models

Claas A Voelcker, Anastasiia Pedan, Arash Ahmadian, Romina Abachi, Igor Gilitschenski, Amir-Massoud Farahmand
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:61745-61768, 2025.

Abstract

The idea of value-aware model learning, that models should produce accurate value estimates, has gained prominence in model-based reinforcement learning. The MuZero loss, which penalizes the discrepancy between a model's value prediction and the ground-truth value function, has been utilized in several prominent empirical works in the literature. However, theoretical investigation into its strengths and weaknesses is limited. In this paper, we analyze the family of value-aware model learning losses, which includes the popular MuZero loss. We show that these losses, as commonly used, are uncalibrated surrogate losses, which means that they do not always recover the correct model and value function. Building on this insight, we propose corrections to solve this issue. Furthermore, we investigate the interplay between loss calibration, latent model architectures, and the auxiliary losses that are commonly employed when training MuZero-style agents. We show that while deterministic models can be sufficient to predict accurate values, learning calibrated stochastic models is still advantageous.
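For readers unfamiliar with the setting, the value-aware model learning objective referenced in the abstract is typically written in roughly the following form. This is an illustrative sketch rather than the paper's exact formulation: P denotes the true transition kernel, P-hat the learned model, V a value function, and mu a data distribution over state-action pairs.

\[
\mathcal{L}_{\mathrm{VAML}}(\hat{P}) \;=\; \mathbb{E}_{(s,a)\sim\mu}\!\left[\left(\mathbb{E}_{s'\sim \hat{P}(\cdot\mid s,a)}\big[V(s')\big] \;-\; \mathbb{E}_{s'\sim P(\cdot\mid s,a)}\big[V(s')\big]\right)^{2}\right].
\]

In MuZero-style implementations, the expectation under the true dynamics is usually approximated by value targets computed from observed trajectories; the paper's calibration analysis concerns when minimizing such a surrogate actually recovers the correct model and value function.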

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-voelcker25a,
  title     = {Calibrated Value-Aware Model Learning with Probabilistic Environment Models},
  author    = {Voelcker, Claas A and Pedan, Anastasiia and Ahmadian, Arash and Abachi, Romina and Gilitschenski, Igor and Farahmand, Amir-Massoud},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {61745--61768},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/voelcker25a/voelcker25a.pdf},
  url       = {https://proceedings.mlr.press/v267/voelcker25a.html}
}
Endnote
%0 Conference Paper
%T Calibrated Value-Aware Model Learning with Probabilistic Environment Models
%A Claas A Voelcker
%A Anastasiia Pedan
%A Arash Ahmadian
%A Romina Abachi
%A Igor Gilitschenski
%A Amir-Massoud Farahmand
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-voelcker25a
%I PMLR
%P 61745--61768
%U https://proceedings.mlr.press/v267/voelcker25a.html
%V 267
APA
Voelcker, C.A., Pedan, A., Ahmadian, A., Abachi, R., Gilitschenski, I. & Farahmand, A.-M. (2025). Calibrated Value-Aware Model Learning with Probabilistic Environment Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:61745-61768. Available from https://proceedings.mlr.press/v267/voelcker25a.html.
