Position: Benchmarking is Limited in Reinforcement Learning Research

Scott M. Jordan, Adam White, Bruno Castro Da Silva, Martha White, Philip S. Thomas
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:22551-22569, 2024.

Abstract

Novel reinforcement learning algorithms, or improvements on existing ones, are commonly justified by evaluating their performance on benchmark environments and comparing them to an ever-changing set of standard algorithms. However, despite numerous calls for improvements, experimental practices continue to produce misleading or unsupported claims. One reason for the ongoing substandard practices is that conducting rigorous benchmarking experiments requires substantial computational time. This work investigates the sources of increased computation costs in rigorous experiment designs. We show that conducting rigorous performance benchmarks will often incur prohibitive computational costs. As a result, we argue for using an additional experimentation paradigm to overcome the limitations of benchmarking.
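The cost claim becomes concrete with simple arithmetic: a rigorous benchmark multiplies the number of algorithms, environments, independent trials, hyperparameter configurations, and steps per training run. The sketch below is only a hypothetical back-of-the-envelope calculation; every quantity in it is an assumption chosen for illustration and none is taken from the paper.

import math

# Back-of-the-envelope cost of a rigorous RL benchmark.
# All quantities below are hypothetical placeholders, not figures from the paper.
num_algorithms = 5          # algorithms being compared
num_environments = 10       # benchmark environments
num_trials = 30             # independent trials per (algorithm, environment) pair
hyperparam_configs = 50     # configurations searched within each trial
steps_per_run = 1_000_000   # environment steps per training run
steps_per_second = 2_000    # simulation + learning throughput of one worker

total_runs = num_algorithms * num_environments * num_trials * hyperparam_configs
total_steps = total_runs * steps_per_run
compute_days = total_steps / steps_per_second / 86_400  # single-worker days

print(f"runs: {total_runs:,}")
print(f"environment steps: {total_steps:,}")
print(f"single-worker compute: {compute_days:,.0f} days")

With these assumed numbers the benchmark requires 75,000 training runs and on the order of a year of single-worker compute, which is the kind of growth in cost the paper attributes to rigorous experiment designs.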

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-jordan24a,
  title     = {Position: Benchmarking is Limited in Reinforcement Learning Research},
  author    = {Jordan, Scott M. and White, Adam and Silva, Bruno Castro Da and White, Martha and Thomas, Philip S.},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {22551--22569},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/jordan24a/jordan24a.pdf},
  url       = {https://proceedings.mlr.press/v235/jordan24a.html}
}
Endnote
%0 Conference Paper
%T Position: Benchmarking is Limited in Reinforcement Learning Research
%A Scott M. Jordan
%A Adam White
%A Bruno Castro Da Silva
%A Martha White
%A Philip S. Thomas
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-jordan24a
%I PMLR
%P 22551--22569
%U https://proceedings.mlr.press/v235/jordan24a.html
%V 235
APA
Jordan, S.M., White, A., Silva, B.C.D., White, M. & Thomas, P.S. (2024). Position: Benchmarking is Limited in Reinforcement Learning Research. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:22551-22569. Available from https://proceedings.mlr.press/v235/jordan24a.html.
