Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning

Stephanie Milani, Nicholay Topin, Brandon Houghton, William H. Guss, Sharada P. Mohanty, Keisuke Nakata, Oriol Vinyals, Noboru Sean Kuno
Proceedings of the NeurIPS 2019 Competition and Demonstration Track, PMLR 123:203-214, 2020.

Abstract

To facilitate research in the direction of sample efficient reinforcement learning, we held the MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). The primary goal of this competition was to promote the development of algorithms that use human demonstrations alongside reinforcement learning to reduce the number of samples needed to solve complex, hierarchical, and sparse environments. We describe the competition, outlining the primary challenge, the competition design, and the resources that we provided to the participants. We provide an overview of the top solutions, each of which uses deep reinforcement learning and/or imitation learning. We also discuss the impact of our organizational decisions on the competition and future directions for improvement.

Cite this Paper


BibTeX
@InProceedings{pmlr-v123-milani20a,
  title     = {Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning},
  author    = {Milani, Stephanie and Topin, Nicholay and Houghton, Brandon and Guss, William H. and Mohanty, Sharada P. and Nakata, Keisuke and Vinyals, Oriol and Kuno, Noboru Sean},
  booktitle = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track},
  pages     = {203--214},
  year      = {2020},
  editor    = {Escalante, Hugo Jair and Hadsell, Raia},
  volume    = {123},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--14 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v123/milani20a/milani20a.pdf},
  url       = {https://proceedings.mlr.press/v123/milani20a.html},
  abstract  = {To facilitate research in the direction of sample efficient reinforcement learning, we held the MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). The primary goal of this competition was to promote the development of algorithms that use human demonstrations alongside reinforcement learning to reduce the number of samples needed to solve complex, hierarchical, and sparse environments. We describe the competition, outlining the primary challenge, the competition design, and the resources that we provided to the participants. We provide an overview of the top solutions, each of which use deep reinforcement learning and/or imitation learning. We also discuss the impact of our organizational decisions on the competition and future directions for improvement.}
}
Endnote
%0 Conference Paper
%T Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning
%A Stephanie Milani
%A Nicholay Topin
%A Brandon Houghton
%A William H. Guss
%A Sharada P. Mohanty
%A Keisuke Nakata
%A Oriol Vinyals
%A Noboru Sean Kuno
%B Proceedings of the NeurIPS 2019 Competition and Demonstration Track
%C Proceedings of Machine Learning Research
%D 2020
%E Hugo Jair Escalante
%E Raia Hadsell
%F pmlr-v123-milani20a
%I PMLR
%P 203--214
%U https://proceedings.mlr.press/v123/milani20a.html
%V 123
%X To facilitate research in the direction of sample efficient reinforcement learning, we held the MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). The primary goal of this competition was to promote the development of algorithms that use human demonstrations alongside reinforcement learning to reduce the number of samples needed to solve complex, hierarchical, and sparse environments. We describe the competition, outlining the primary challenge, the competition design, and the resources that we provided to the participants. We provide an overview of the top solutions, each of which use deep reinforcement learning and/or imitation learning. We also discuss the impact of our organizational decisions on the competition and future directions for improvement.
APA
Milani, S., Topin, N., Houghton, B., Guss, W.H., Mohanty, S.P., Nakata, K., Vinyals, O. & Kuno, N.S. (2020). Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning. Proceedings of the NeurIPS 2019 Competition and Demonstration Track, in Proceedings of Machine Learning Research 123:203-214. Available from https://proceedings.mlr.press/v123/milani20a.html.