Radar Artifact Labeling Framework (RALF): Method for Plausible Radar Detections in Datasets

Simon T. Isele, M. Schilling, Fabian E. Klein, S. Saralajew, Johann Marius Zöllner
DOI: 10.5220/0010395100220033 (https://doi.org/10.5220/0010395100220033)
Published in: International Conference on Vehicle Technology and Intelligent Transport Systems, 2020-12-03
Citations: 2

Abstract

Research on localization and perception for Autonomous Driving focuses mainly on camera and LiDAR datasets and rarely on radar data. Manually labeling sparse radar point clouds is challenging. For dataset generation, we propose the cross-sensor Radar Artifact Labeling Framework (RALF). Automatically generated labels for automotive radar data help to mitigate radar shortcomings, such as artifacts, for artificial-intelligence applications. RALF provides plausibility labels for raw radar detections, distinguishing between artifacts and targets. The optical evaluation backbone consists of generalized monocular depth estimation from surround-view camera images combined with LiDAR scans. Modern automotive sensor sets of cameras and LiDAR allow image-based relative depth information to be calibrated in overlapping sensing areas. K-nearest-neighbors matching relates the optical perception point cloud to the raw radar detections. In parallel, a temporal tracking evaluation considers the transient behavior of the radar detections. Based on the distance between matches, respecting both sensor and model uncertainties, we derive a plausibility rating for every radar detection. We validate the results by evaluating error metrics on a semi-manually labeled ground-truth dataset of $3.28\cdot10^6$ points. Besides generating plausible radar detections, the framework enables further labeled low-level radar signal datasets for perception and Autonomous Driving learning tasks.
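The abstract does not include code, but the core matching step it describes can be sketched: relate each raw radar detection to the optical perception point cloud via nearest-neighbor search, then map match distance to a plausibility score. The Gaussian distance-to-score mapping and the `sigma` uncertainty scale below are hypothetical stand-ins for the paper's uncertainty-aware rating, not the authors' actual formulation.

```python
import numpy as np
from scipy.spatial import cKDTree


def plausibility_scores(radar_pts, optical_pts, sigma=0.5, k=1):
    """Rate each radar detection by its distance to the optical
    perception point cloud (camera depth + LiDAR).

    radar_pts   : (N, 3) raw radar detections in Cartesian coordinates
    optical_pts : (M, 3) fused camera/LiDAR perception point cloud
    sigma       : hypothetical combined sensor/model uncertainty scale [m]
    k           : number of optical neighbors averaged per detection
    """
    tree = cKDTree(optical_pts)
    dists, _ = tree.query(radar_pts, k=k)
    if k > 1:
        dists = dists.mean(axis=1)
    # Gaussian kernel maps match distance to a [0, 1] plausibility:
    # detections coincident with optical points score near 1,
    # isolated detections (likely artifacts) score near 0.
    return np.exp(-0.5 * (dists / sigma) ** 2)
```

A detection lying on the optical point cloud receives a score close to 1, while a detection far from any optical evidence scores close to 0 and would be labeled an artifact; thresholding these scores yields the binary artifact/target labels.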