Fusion techniques of frame and event cameras in autonomous driving: A review

Impact Factor: 15.5 | CAS Tier 1, Computer Science | JCR Q1, Computer Science, Artificial Intelligence
Peijun Shi, Chee-Onn Chow, Wei Ru Wong
Journal: Information Fusion, Volume 127, Article 103697
DOI: 10.1016/j.inffus.2025.103697
Published: 2025-09-10
Citations: 0

Abstract

The rapid advancement of autonomous driving technology demands robust environmental perception systems capable of operating under extreme illumination variations, high-speed motion, and adverse weather conditions. While conventional frame-based cameras offer rich spatial and textural information, they suffer from fixed frame rates and limited dynamic range. Event cameras, as neuromorphic vision sensors, provide unique advantages in temporal resolution, dynamic range, and power efficiency through asynchronous pixel-level brightness change detection. This paper presents the first comprehensive review of frame-event camera fusion technology for autonomous driving applications. This paper establishes a fusion framework tailored to autonomous driving perception requirements and analyzes the complementary characteristics of both sensor modalities in addressing critical perception challenges. This paper proposes the first systematic classification of frame-event fusion architectures, covering multi-level approaches from data-level to decision-level integration, while tracing the technical evolution of fusion strategies. Additionally, this paper constructs a dataset evaluation framework for autonomous driving tasks, providing systematic benchmark selection guidance. Through detailed analysis of deployment challenges, this paper identifies key technical barriers including temporal synchronization, computational efficiency, and cross-modal calibration, alongside corresponding solutions. Finally, this paper presents perspectives on emerging paradigms and future directions, providing essential references for advancing practical frame-event fusion applications in autonomous driving.
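As a rough illustration of the asynchronous pixel-level brightness-change detection described in the abstract, the sketch below simulates an event stream from an ordinary frame sequence. This is a minimal, hypothetical model: the function name, the fixed contrast threshold, and the (x, y, t, polarity) tuple format are illustrative assumptions, not details taken from the paper.

```python
import math

def frames_to_events(frames, timestamps, threshold=0.2):
    """Toy event-stream simulator (illustrative only).

    Emits an event (x, y, t, polarity) whenever a pixel's log-intensity
    has changed by at least `threshold` since the last event at that
    pixel, mimicking the asynchronous pixel-level brightness-change
    detection performed by event cameras.
    """
    h, w = len(frames[0]), len(frames[0][0])
    # Per-pixel log-intensity reference, initialised from the first frame.
    ref = [[math.log(frames[0][y][x] + 1e-6) for x in range(w)]
           for y in range(h)]
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        for y in range(h):
            for x in range(w):
                log_i = math.log(frame[y][x] + 1e-6)
                diff = log_i - ref[y][x]
                if abs(diff) >= threshold:
                    events.append((x, y, t, 1 if diff > 0 else -1))
                    # Reset the reference only where an event fired, so
                    # quiet pixels produce no output at all.
                    ref[y][x] = log_i
    return events
```

A real event camera fires per pixel in continuous time with microsecond latency and a hardware contrast threshold; sampling at frame times as above only approximates that behaviour, but it conveys why static regions generate no data while fast brightness changes generate dense, high-rate events.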
Source journal: Information Fusion (Engineering/Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Aims & Scope: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.