F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning

Matthew O’Kelly, Hongrui Zheng, Dhruv Karthik, Rahul Mangharam
Proceedings of the NeurIPS 2019 Competition and Demonstration Track, PMLR 123:77-89, 2020.

Abstract

The deployment and evaluation of learning algorithms on autonomous vehicles (AV) is expensive, slow, and potentially unsafe. This paper details the F1TENTH autonomous racing platform, an open-source evaluation framework for training, testing, and evaluating autonomous systems. With 1/10th-scale low-cost hardware and multiple virtual environments, F1TENTH enables safe and rapid experimentation of AV algorithms even in laboratory research settings. We present three benchmark tasks and baselines in the setting of autonomous racing, demonstrating the flexibility and features of our evaluation environment.
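Illustrative note: the paper describes gym-style virtual environments with benchmark tasks and baseline agents for continuous control. As a sketch only (the environment id "f110-v0", the observation key "scan", and the action layout below are assumptions, not the paper's documented API), evaluating a simple reactive steering baseline in such an environment might look like this:

import gym
import numpy as np

# Hypothetical environment id; the actual registration name and the
# observation/action layout depend on the F1TENTH simulator release used.
env = gym.make("f110-v0")

def baseline_policy(obs):
    # Toy gap-following heuristic: steer toward the direction of the
    # largest LiDAR range reading and drive at a fixed speed.
    # Assumes obs["scan"] is a 1D array of LiDAR ranges (an assumption).
    scan = np.asarray(obs["scan"])
    best = int(np.argmax(scan))
    # Map the chosen beam index to a steering angle in [-0.4, 0.4] rad.
    steer = -0.4 + 0.8 * best / (len(scan) - 1)
    speed = 2.0  # m/s, fixed for this sketch
    return np.array([steer, speed], dtype=np.float32)

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = baseline_policy(obs)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)

The same reset/step loop can wrap a learned policy instead of the heuristic, which is how the benchmark tasks and baselines in the paper are evaluated in simulation before transfer to the 1/10th-scale hardware.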

Cite this Paper


BibTeX
@InProceedings{pmlr-v123-o-kelly20a,
  title     = {F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning},
  author    = {O'Kelly, Matthew and Zheng, Hongrui and Karthik, Dhruv and Mangharam, Rahul},
  booktitle = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track},
  pages     = {77--89},
  year      = {2020},
  editor    = {Escalante, Hugo Jair and Hadsell, Raia},
  volume    = {123},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--14 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v123/o-kelly20a/o-kelly20a.pdf},
  url       = {https://proceedings.mlr.press/v123/o-kelly20a.html},
  abstract  = {The deployment and evaluation of learning algorithms on autonomous vehicles (AV) is expensive, slow, and potentially unsafe. This paper details the F1TENTH autonomous racing platform, an open-source evaluation framework for training, testing, and evaluating autonomous systems. With 1/10th-scale low-cost hardware and multiple virtual environments, F1TENTH enables safe and rapid experimentation of AV algorithms even in laboratory research settings. We present three benchmark tasks and baselines in the setting of autonomous racing, demonstrating the flexibility and features of our evaluation environment.}
}
Endnote
%0 Conference Paper
%T F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning
%A Matthew O’Kelly
%A Hongrui Zheng
%A Dhruv Karthik
%A Rahul Mangharam
%B Proceedings of the NeurIPS 2019 Competition and Demonstration Track
%C Proceedings of Machine Learning Research
%D 2020
%E Hugo Jair Escalante
%E Raia Hadsell
%F pmlr-v123-o-kelly20a
%I PMLR
%P 77--89
%U https://proceedings.mlr.press/v123/o-kelly20a.html
%V 123
%X The deployment and evaluation of learning algorithms on autonomous vehicles (AV) is expensive, slow, and potentially unsafe. This paper details the F1TENTH autonomous racing platform, an open-source evaluation framework for training, testing, and evaluating autonomous systems. With 1/10th-scale low-cost hardware and multiple virtual environments, F1TENTH enables safe and rapid experimentation of AV algorithms even in laboratory research settings. We present three benchmark tasks and baselines in the setting of autonomous racing, demonstrating the flexibility and features of our evaluation environment.
APA
O’Kelly, M., Zheng, H., Karthik, D. & Mangharam, R. (2020). F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning. Proceedings of the NeurIPS 2019 Competition and Demonstration Track, in Proceedings of Machine Learning Research 123:77-89. Available from https://proceedings.mlr.press/v123/o-kelly20a.html.
