Revisiting Bellman Errors for Offline Model Selection

Joshua P Zitovsky, Daniel De Marchi, Rishabh Agarwal, Michael Rene Kosorok
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:43369-43406, 2023.

Abstract

Offline model selection (OMS), that is, choosing the best policy from a set of many policies given only logged data, is crucial for applying offline RL in real-world settings. One idea that has been extensively explored is to select policies based on the mean squared Bellman error (MSBE) of the associated Q-functions. However, previous work has struggled to obtain adequate OMS performance with Bellman errors, leading many researchers to abandon the idea. To this end, we elucidate why previous work has seen pessimistic results with Bellman errors and identify conditions under which OMS algorithms based on Bellman errors will perform well. Moreover, we develop a new estimator of the MSBE that is more accurate than prior methods. Our estimator obtains impressive OMS performance on diverse discrete control tasks, including Atari games.
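To make the selection criterion concrete: a naive estimate of the MSBE averages squared temporal-difference errors of a candidate Q-function over the logged transitions. The sketch below is purely illustrative (tabular Q, greedy Bellman operator, names invented here) and is not the paper's estimator; in fact, this plain squared TD error is biased under stochastic transitions (the double-sampling problem), which is part of why naive Bellman-error estimates can mislead OMS.

```python
import numpy as np

def naive_empirical_msbe(Q, transitions, gamma=0.99):
    """Naive empirical MSBE of a tabular Q-function on logged data.

    Q           : array of shape (num_states, num_actions)
    transitions : iterable of (s, a, r, s_next, done) tuples
    gamma       : discount factor

    Averages squared TD errors against the greedy Bellman target.
    Note: this is a biased estimate of the true MSBE when transitions
    are stochastic (double-sampling problem).
    """
    sq_errors = []
    for s, a, r, s_next, done in transitions:
        # Greedy Bellman backup; terminal states carry no bootstrap term.
        target = r + (0.0 if done else gamma * Q[s_next].max())
        sq_errors.append((Q[s, a] - target) ** 2)
    return float(np.mean(sq_errors))
```

Under an MSBE-based OMS rule, one would compute this quantity for each candidate Q-function and select the policy whose Q-function scores lowest.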

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-zitovsky23a,
  title     = {Revisiting {B}ellman Errors for Offline Model Selection},
  author    = {Zitovsky, Joshua P and De Marchi, Daniel and Agarwal, Rishabh and Kosorok, Michael Rene},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {43369--43406},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/zitovsky23a/zitovsky23a.pdf},
  url       = {https://proceedings.mlr.press/v202/zitovsky23a.html},
  abstract  = {Offline model selection (OMS), that is, choosing the best policy from a set of many policies given only logged data, is crucial for applying offline RL in real-world settings. One idea that has been extensively explored is to select policies based on the mean squared Bellman error (MSBE) of the associated Q-functions. However, previous work has struggled to obtain adequate OMS performance with Bellman errors, leading many researchers to abandon the idea. To this end, we elucidate why previous work has seen pessimistic results with Bellman errors and identify conditions under which OMS algorithms based on Bellman errors will perform well. Moreover, we develop a new estimator of the MSBE that is more accurate than prior methods. Our estimator obtains impressive OMS performance on diverse discrete control tasks, including Atari games.}
}
Endnote
%0 Conference Paper
%T Revisiting Bellman Errors for Offline Model Selection
%A Joshua P Zitovsky
%A Daniel De Marchi
%A Rishabh Agarwal
%A Michael Rene Kosorok
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-zitovsky23a
%I PMLR
%P 43369--43406
%U https://proceedings.mlr.press/v202/zitovsky23a.html
%V 202
%X Offline model selection (OMS), that is, choosing the best policy from a set of many policies given only logged data, is crucial for applying offline RL in real-world settings. One idea that has been extensively explored is to select policies based on the mean squared Bellman error (MSBE) of the associated Q-functions. However, previous work has struggled to obtain adequate OMS performance with Bellman errors, leading many researchers to abandon the idea. To this end, we elucidate why previous work has seen pessimistic results with Bellman errors and identify conditions under which OMS algorithms based on Bellman errors will perform well. Moreover, we develop a new estimator of the MSBE that is more accurate than prior methods. Our estimator obtains impressive OMS performance on diverse discrete control tasks, including Atari games.
APA
Zitovsky, J.P., De Marchi, D., Agarwal, R. & Kosorok, M.R. (2023). Revisiting Bellman Errors for Offline Model Selection. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:43369-43406. Available from https://proceedings.mlr.press/v202/zitovsky23a.html.