A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots
Proceedings of the Conference on Robot Learning, PMLR 100:466-489, 2020.
Abstract
As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that the algorithms therein can be compared easily and fairly, with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms’ intrinsic variance, the environments’ stochasticity, and numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues that lead to irreproducible research and how to manage them. We further show how a rigorous and standardised evaluation approach can ease the documentation, evaluation and fair comparison of different algorithms, emphasising the importance of choosing the right measurement metrics and conducting proper statistical analysis of the results for unbiased reporting.
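One concrete reading of “proper statistics” is to report mean performance with a confidence interval computed across many independent training seeds, rather than a single best run. The sketch below is illustrative only and not taken from the paper: it bootstraps a confidence interval for the mean return over per-seed results; the function name bootstrap_ci and the sample data are assumptions for demonstration.

```python
import numpy as np

def bootstrap_ci(per_seed_returns, n_boot=10_000, alpha=0.05, rng=None):
    """Bootstrap confidence interval for the mean return across seeds."""
    rng = np.random.default_rng(rng)
    returns = np.asarray(per_seed_returns, dtype=float)
    # Resample the per-seed returns with replacement and recompute
    # the mean for each bootstrap replicate.
    boot_means = rng.choice(
        returns, size=(n_boot, returns.size), replace=True
    ).mean(axis=1)
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return returns.mean(), (lo, hi)

# Hypothetical average returns from 10 independent training runs (seeds).
seed_returns = [212.4, 198.7, 240.1, 181.3, 225.9,
                205.0, 219.6, 190.2, 231.8, 208.5]
mean, (lo, hi) = bootstrap_ci(seed_returns)
print(f"mean return = {mean:.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```

Bootstrapping is one common, distribution-free choice here; the paper’s actual evaluation protocol may differ.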