Showing Your Offline Reinforcement Learning Work: Online Evaluation Budget Matters

Vladislav Kurenkov, Sergey Kolesnikov
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11729-11752, 2022.

Abstract

In this work, we argue for the importance of an online evaluation budget for a reliable comparison of deep offline RL algorithms. First, we show that the online evaluation budget is problem-dependent: some problems allow for fewer online interactions, others for more. Second, we demonstrate that the preference between algorithms is budget-dependent across a diverse range of decision-making domains such as Robotics, Finance, and Energy Management. Following these points, we suggest reporting the performance of deep offline RL algorithms under varying online evaluation budgets. To facilitate this, we propose using a reporting tool from the NLP field, Expected Validation Performance. This technique makes it possible to reliably estimate the expected maximum performance under different budgets without requiring any computation beyond the hyperparameter search itself. Using this tool, we also show that Behavioral Cloning often compares favorably to offline RL algorithms when working within a limited budget.
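
As a rough illustration of the Expected Validation Performance idea referenced in the abstract, the sketch below estimates the expected maximum score attainable under a given online evaluation budget, using only the scores already collected during hyperparameter search. The function name `expected_max_performance` and the Dodge et al. (2019)-style with-replacement estimator are assumptions for illustration, not necessarily the exact estimator used in the paper.

```python
# Minimal sketch (assumption: not the paper's exact implementation) of an
# Expected Validation Performance-style estimator: given evaluation scores of
# n policies from a hyperparameter search, estimate the expected best score
# one would obtain when only `budget` policies can be evaluated online.
from typing import Sequence
import numpy as np


def expected_max_performance(scores: Sequence[float], budget: int) -> float:
    """Expected best score among `budget` policies drawn i.i.d. from the
    empirical distribution of `scores` (Dodge et al., 2019-style estimator)."""
    v = np.sort(np.asarray(scores, dtype=float))
    n = len(v)
    # P(max of `budget` draws <= v[i]) under the empirical CDF.
    cdf_pow = (np.arange(1, n + 1) / n) ** budget
    # Probability that the maximum equals exactly v[i].
    pmf = np.diff(np.concatenate(([0.0], cdf_pow)))
    return float(np.sum(v * pmf))


# Example: report the expected maximum return for several online budgets,
# using hypothetical returns from a 50-policy hyperparameter search.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hyperparam_scores = rng.normal(loc=100.0, scale=15.0, size=50)
    for k in (1, 5, 10, 25, 50):
        print(f"budget={k:2d}  expected max return="
              f"{expected_max_performance(hyperparam_scores, k):.1f}")
```

Plotting this quantity against the budget yields the kind of budget-dependent comparison the abstract argues for: an algorithm that wins at a large budget may lose to a simpler baseline such as Behavioral Cloning when only a handful of policies can be evaluated online.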

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-kurenkov22a,
  title     = {Showing Your Offline Reinforcement Learning Work: Online Evaluation Budget Matters},
  author    = {Kurenkov, Vladislav and Kolesnikov, Sergey},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {11729--11752},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kurenkov22a/kurenkov22a.pdf},
  url       = {https://proceedings.mlr.press/v162/kurenkov22a.html},
  abstract  = {In this work, we argue for the importance of an online evaluation budget for a reliable comparison of deep offline RL algorithms. First, we delineate that the online evaluation budget is problem-dependent, where some problems allow for less but others for more. And second, we demonstrate that the preference between algorithms is budget-dependent across a diverse range of decision-making domains such as Robotics, Finance, and Energy Management. Following the points above, we suggest reporting the performance of deep offline RL algorithms under varying online evaluation budgets. To facilitate this, we propose to use a reporting tool from the NLP field, Expected Validation Performance. This technique makes it possible to reliably estimate expected maximum performance under different budgets while not requiring any additional computation beyond hyperparameter search. By employing this tool, we also show that Behavioral Cloning is often more favorable to offline RL algorithms when working within a limited budget.}
}
Endnote
%0 Conference Paper
%T Showing Your Offline Reinforcement Learning Work: Online Evaluation Budget Matters
%A Vladislav Kurenkov
%A Sergey Kolesnikov
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kurenkov22a
%I PMLR
%P 11729--11752
%U https://proceedings.mlr.press/v162/kurenkov22a.html
%V 162
%X In this work, we argue for the importance of an online evaluation budget for a reliable comparison of deep offline RL algorithms. First, we delineate that the online evaluation budget is problem-dependent, where some problems allow for less but others for more. And second, we demonstrate that the preference between algorithms is budget-dependent across a diverse range of decision-making domains such as Robotics, Finance, and Energy Management. Following the points above, we suggest reporting the performance of deep offline RL algorithms under varying online evaluation budgets. To facilitate this, we propose to use a reporting tool from the NLP field, Expected Validation Performance. This technique makes it possible to reliably estimate expected maximum performance under different budgets while not requiring any additional computation beyond hyperparameter search. By employing this tool, we also show that Behavioral Cloning is often more favorable to offline RL algorithms when working within a limited budget.
APA
Kurenkov, V. & Kolesnikov, S. (2022). Showing Your Offline Reinforcement Learning Work: Online Evaluation Budget Matters. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11729-11752. Available from https://proceedings.mlr.press/v162/kurenkov22a.html.