CRRFNet: An adaptive traffic object detection method based on camera and radar radio frequency fusion

Impact Factor 7.6 · CAS Tier 1 (Engineering & Technology) · JCR Q1, Transportation Science & Technology
Wenbo Wang, Weibin Zhang
DOI: 10.1016/j.trc.2024.104791
Journal: Transportation Research Part C: Emerging Technologies
Publication date: 2024-07-30 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0968090X24003127
Citations: 0

Abstract

Numerous studies have shown that camera–radar fusion is an effective and economical solution for traffic object detection; however, improving the reliability and robustness of fusion methods remains a major challenge. This paper proposes an adaptive traffic object detection method based on a Camera and Radar Radio Frequency Network (CRRFNet) to achieve robust and reliable detection in noisy or abnormal scenes. First, two separate deep convolution modules are designed to extract features from the camera and the radar; second, the camera and radar features are concatenated, and a deconvolution module is built for upsampling; third, a heatmap module compresses redundant channels. Finally, objects in the Field of View (FoV) are predicted by Location-based Non-Maximum Suppression (L-NMS). In addition, a data-scrambling technique is proposed to alleviate the fusion method's tendency to overfit to a single sensor. The existing Washington University Camera Radar (CRUW) dataset is improved, and a new dataset, Camera-Radar Nanjing University of Science and Technology Version 1.0 (CRNJUST-v1.0), is collected to verify the proposed method. Experiments show that CRRFNet uses radar and camera information simultaneously and is far more accurate than single-sensor methods. Combined with the proposed data-scrambling technique, CRRFNet shows excellent robustness, effectively detecting objects under interference or single-sensor failure.
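The abstract's final prediction step, location-based NMS, ranks detection peaks by confidence and suppresses lower-scoring peaks that fall too close to an already-kept one. The paper's exact formulation is not given here; the following is a minimal greedy sketch under that assumption, where `min_dist` is a hypothetical suppression radius in whatever coordinate frame (range–azimuth or image plane) the peaks live in:

```python
import numpy as np

def location_nms(peaks, scores, min_dist=2.0):
    """Greedy location-based NMS: visit peaks in descending score order,
    keeping a peak only if it is at least min_dist away from every
    peak kept so far. Returns the indices of the surviving peaks."""
    peaks = np.asarray(peaks, dtype=float)
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    for i in order:
        if all(np.linalg.norm(peaks[i] - peaks[j]) >= min_dist for j in keep):
            keep.append(int(i))
    return keep

# Two overlapping peaks and one distant peak: the weaker of the
# overlapping pair is suppressed, the distant one survives.
detections = [(0.0, 0.0), (0.5, 0.5), (5.0, 5.0)]
confidences = [0.9, 0.8, 0.7]
print(location_nms(detections, confidences))  # → [0, 2]
```

Unlike box-IoU NMS, this variant needs only a point location per object, which fits heatmap-style outputs where each object is a peak rather than a bounding box.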

Source journal metrics
CiteScore: 15.80
Self-citation rate: 12.00%
Articles published: 332
Review time: 64 days
Journal description: Transportation Research Part C (TR_C) is dedicated to high-quality scholarly research on the development, applications, and implications of transportation systems and emerging technologies. Its focus lies not on individual technologies in isolation, but on their broader implications for the planning, design, operation, control, maintenance, and rehabilitation of transportation systems, services, and components. The intellectual core of the journal is the transportation aspect rather than the technology itself. The journal encourages the integration of quantitative methods from diverse fields such as operations research, control systems, complex networks, computer science, and artificial intelligence.