Deep Traffic Benchmark: Aerial Perception and Driven Behavior Dataset

Guoxing Zhang, Qiuping Li, Yiming Liu, Zhanpeng Wang, Yuanqi Chen, Wenrui Cai, Weiye Zhang, Bingting Guo, Zhi Zeng, Jiasong Zhu
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1670-1682, 2024.

Abstract

Predicting human driving behavior has long been an important area of autonomous driving research. Existing autonomous driving data in this area is limited in both perspective and duration. For example, vehicles may occlude one another on the road, so data on the vehicles behind is lost even though it is useful for research. In addition, the host vehicle is constrained by the road environment and cannot observe a designated area for an extended period of time. To investigate the potential relationship between human driving behavior and traffic conditions, we provide a drone-collected video dataset, Deep Traffic, that includes: (1) aerial footage from a vertical (top-down) perspective, (2) captured images and annotations for training vehicle detection and semantic segmentation models, (3) high-definition map data of the captured area, and (4) development scripts for various features. Deep Traffic is, to date, the largest and most comprehensive dataset of its kind, covering both urban and highway areas. We believe this benchmark dataset will greatly facilitate the use of drones to monitor traffic flow and to study human driver behavior, as well as research into the capacity of the traffic system. All datasets and pre-trained models can be downloaded from the project's GitHub repository.
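To make the dataset's components concrete, below is a minimal, hypothetical Python sketch of how one might iterate over an aerial clip together with per-frame vehicle annotations. The file names, directory layout, and JSON annotation schema (a list of per-frame records with a "frame" index and "boxes" as [x, y, w, h] lists) are all assumptions made for illustration, not the repository's actual API; consult the development scripts in the project's GitHub repository for the real loaders.

# Hypothetical loader sketch for a Deep Traffic-style aerial clip.
# ASSUMPTIONS: the paths and the JSON annotation schema below are
# illustrative only; the actual repository layout may differ.
import json
import cv2  # pip install opencv-python

VIDEO_PATH = "deep_traffic/clips/clip_0001.mp4"         # hypothetical path
ANNOT_PATH = "deep_traffic/annotations/clip_0001.json"  # hypothetical path

def load_annotations(path):
    """Read per-frame vehicle boxes: {frame_index: [[x, y, w, h], ...]}."""
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)  # assumed: list of {"frame": int, "boxes": [...]}
    return {rec["frame"]: rec["boxes"] for rec in records}

def iterate_frames(video_path, annotations):
    """Yield (frame_index, image, boxes) for every decoded frame."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield index, frame, annotations.get(index, [])
        index += 1
    cap.release()

if __name__ == "__main__":
    annots = load_annotations(ANNOT_PATH)
    for idx, frame, boxes in iterate_frames(VIDEO_PATH, annots):
        # Draw the annotated vehicle boxes on the aerial frame; in practice
        # this is where a detector or tracker would consume the frame.
        for x, y, w, h in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)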

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-zhang24d,
  title     = {Deep Traffic Benchmark: {A}erial Perception and Driven Behavior Dataset},
  author    = {Zhang, Guoxing and Li, Qiuping and Liu, Yiming and Wang, Zhanpeng and Chen, Yuanqi and Cai, Wenrui and Zhang, Weiye and Guo, Bingting and Zeng, Zhi and Zhu, Jiasong},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {1670--1682},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/zhang24d/zhang24d.pdf},
  url       = {https://proceedings.mlr.press/v222/zhang24d.html}
}
Endnote
%0 Conference Paper
%T Deep Traffic Benchmark: Aerial Perception and Driven Behavior Dataset
%A Guoxing Zhang
%A Qiuping Li
%A Yiming Liu
%A Zhanpeng Wang
%A Yuanqi Chen
%A Wenrui Cai
%A Weiye Zhang
%A Bingting Guo
%A Zhi Zeng
%A Jiasong Zhu
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-zhang24d
%I PMLR
%P 1670--1682
%U https://proceedings.mlr.press/v222/zhang24d.html
%V 222
APA
Zhang, G., Li, Q., Liu, Y., Wang, Z., Chen, Y., Cai, W., Zhang, W., Guo, B., Zeng, Z. & Zhu, J. (2024). Deep Traffic Benchmark: Aerial Perception and Driven Behavior Dataset. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:1670-1682. Available from https://proceedings.mlr.press/v222/zhang24d.html.

Related Material

Download PDF: https://proceedings.mlr.press/v222/zhang24d/zhang24d.pdf