Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving

Xinyu Zhang, Li Wang, Jian Chen, Cheng Fang, Guangqi Yang, Yichen Wang, Lei Yang, Ziying Song, Lin Liu, Xiaofei Zhang, Bin Xu, Zhiwei Li, Qingshan Yang, Jun Li, Zhenlin Zhang, Weida Wang, Shuzhi Sam Ge

Scientific Data, vol. 12, no. 1, p. 439. Published 2025-03-13.
DOI: 10.1038/s41597-025-04698-2
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11907064/pdf/
Citations: 0
Abstract
4D radar offers higher point cloud density and more precise vertical resolution than conventional 3D radar, making it promising for adverse scenarios in the environmental perception of autonomous driving. However, 4D radar is noisier than LiDAR and requires different filtering strategies, which affect the point cloud density and noise level. Comparative analyses of different point cloud densities and noise levels are still lacking, mainly because existing datasets use only one type of 4D radar, making it difficult to compare different 4D radars in the same scenario. We introduce a novel large-scale multi-modal dataset that captures both types of 4D radar, consisting of 151 sequences, most of which are 20 seconds long, with 10,007 synchronized and annotated frames in total. Our dataset covers a variety of challenging driving scenarios, including multiple road conditions, weather conditions, and different lighting intensities and times of day. It supports 3D object detection and tracking as well as multi-modal tasks. We experimentally validate the dataset, providing valuable insights for studying different types of 4D radar.
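The trade-off the abstract describes, where a radar's filtering strategy controls both point cloud density and noise level, can be illustrated with a minimal sketch. The field layout and the `filter_by_snr` helper below are hypothetical (the actual Dual Radar data format is defined in the paper and its release, not here); the sketch only shows how raising a signal-to-noise threshold removes noisy returns at the cost of density.

```python
import numpy as np

# Hypothetical 4D radar point cloud: columns are x, y, z (metres),
# Doppler velocity (m/s), and SNR (dB). Real Dual Radar fields may differ.
points = np.array([
    [ 1.0,  2.0, 0.5,  3.1, 18.0],
    [ 4.2, -1.0, 0.2, -0.5,  6.0],
    [10.5,  3.3, 1.1,  1.2, 25.0],
    [ 2.2,  0.4, 0.3,  0.0,  4.5],
])

def filter_by_snr(pts: np.ndarray, snr_threshold_db: float) -> np.ndarray:
    """Keep only points whose SNR exceeds the threshold (column 4)."""
    return pts[pts[:, 4] > snr_threshold_db]

# A stricter threshold lowers the noise level but also the point cloud density.
dense = filter_by_snr(points, 5.0)   # keeps 3 of 4 points
clean = filter_by_snr(points, 15.0)  # keeps 2 of 4 points
print(len(dense), len(clean))  # → 3 2
```

Comparing two 4D radars captured in the same scene, as this dataset enables, amounts to comparing where each sensor's built-in filtering places it on this density-versus-noise curve.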
About the Journal
Scientific Data is an open-access journal focused on data, publishing descriptions of research datasets and articles on data sharing across natural sciences, medicine, engineering, and social sciences. Its goal is to enhance the sharing and reuse of scientific data, encourage broader data sharing, and acknowledge those who share their data.
The journal primarily publishes Data Descriptors, which offer detailed descriptions of research datasets, including data collection methods and technical analyses validating data quality. These descriptors aim to facilitate data reuse rather than to test hypotheses or present new interpretations, methods, or in-depth analyses.