DEN: Depth Enhancement Network for 3-D Object Detection With the Fusion of mmWave Radar and Vision in Autonomous Driving

IF 8.9 | CAS Tier 1 (Computer Science) | JCR Q1, Computer Science, Information Systems
Wenxiang Wang;Jianping Han;Zhongmin Jiang;Zhiyuan Zhou;Yingxiao Wu
{"title":"自动驾驶中融合毫米波雷达和视觉的三维目标检测深度增强网络","authors":"Wenxiang Wang;Jianping Han;Zhongmin Jiang;Zhiyuan Zhou;Yingxiao Wu","doi":"10.1109/JIOT.2025.3525899","DOIUrl":null,"url":null,"abstract":"In the realm of autonomous driving, precise and robust 3-D perception is paramount. Multimodal fusion for 3-D object detection is crucial for improving accuracy, generalization, and robustness in autonomous driving. In this article, we introduce the depth enhancement network (DEN), an innovative camera-radar fusion framework that generates an accurate depth estimation for 3-D object detection. To overcome the limitations caused by the lack of spatial information in an image, DEN estimates image depth using accurate radar points. Furthermore, to extract more comprehensive and fine-grained scene depth information, we present an innovative label optimization strategy (LOS) that enhances label density and quality. DEN achieves an 18.78% reduction in mean absolute error (MAE) and a 12.8% decrease in root mean-square error (RMSE) for depth estimation. Additionally, it improves 3-D object detection accuracy by 0.8% compared to the baseline model. Under low visibility conditions, DEN demonstrates a 6.7% reduction in MAE and a 9.6% reduction in RMSE compared to the baseline. These improvements demonstrated its robustness and enhanced performance under challenging conditions.","PeriodicalId":54347,"journal":{"name":"IEEE Internet of Things Journal","volume":"12 10","pages":"14420-14430"},"PeriodicalIF":8.9000,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DEN: Depth Enhancement Network for 3-D Object Detection With the Fusion of mmWave Radar and Vision in Autonomous Driving\",\"authors\":\"Wenxiang Wang;Jianping Han;Zhongmin Jiang;Zhiyuan Zhou;Yingxiao Wu\",\"doi\":\"10.1109/JIOT.2025.3525899\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the realm of autonomous driving, precise and robust 3-D perception is paramount. Multimodal fusion for 3-D object detection is crucial for improving accuracy, generalization, and robustness in autonomous driving. In this article, we introduce the depth enhancement network (DEN), an innovative camera-radar fusion framework that generates an accurate depth estimation for 3-D object detection. To overcome the limitations caused by the lack of spatial information in an image, DEN estimates image depth using accurate radar points. Furthermore, to extract more comprehensive and fine-grained scene depth information, we present an innovative label optimization strategy (LOS) that enhances label density and quality. DEN achieves an 18.78% reduction in mean absolute error (MAE) and a 12.8% decrease in root mean-square error (RMSE) for depth estimation. Additionally, it improves 3-D object detection accuracy by 0.8% compared to the baseline model. Under low visibility conditions, DEN demonstrates a 6.7% reduction in MAE and a 9.6% reduction in RMSE compared to the baseline. 
These improvements demonstrated its robustness and enhanced performance under challenging conditions.\",\"PeriodicalId\":54347,\"journal\":{\"name\":\"IEEE Internet of Things Journal\",\"volume\":\"12 10\",\"pages\":\"14420-14430\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-01-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Internet of Things Journal\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10824838/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Internet of Things Journal","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10824838/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In the realm of autonomous driving, precise and robust 3-D perception is paramount. Multimodal fusion for 3-D object detection is crucial for improving accuracy, generalization, and robustness in autonomous driving. In this article, we introduce the depth enhancement network (DEN), an innovative camera-radar fusion framework that generates an accurate depth estimation for 3-D object detection. To overcome the limitations caused by the lack of spatial information in an image, DEN estimates image depth using accurate radar points. Furthermore, to extract more comprehensive and fine-grained scene depth information, we present an innovative label optimization strategy (LOS) that enhances label density and quality. DEN achieves an 18.78% reduction in mean absolute error (MAE) and a 12.8% decrease in root mean-square error (RMSE) for depth estimation. Additionally, it improves 3-D object detection accuracy by 0.8% compared to the baseline model. Under low visibility conditions, DEN demonstrates a 6.7% reduction in MAE and a 9.6% reduction in RMSE compared to the baseline. These improvements demonstrated its robustness and enhanced performance under challenging conditions.
Source Journal
IEEE Internet of Things Journal (Computer Science, Information Systems)
CiteScore: 17.60
Self-citation rate: 13.20%
Publication volume: 1982
Journal description: The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications such as smart cities and smart homes. Fields of interest include: IoT architecture, such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standard development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, and ETSI.