{"title":"自动驾驶中框架与事件相机的融合技术综述","authors":"Peijun Shi, Chee-Onn Chow, Wei Ru Wong","doi":"10.1016/j.inffus.2025.103697","DOIUrl":null,"url":null,"abstract":"<div><div>The rapid advancement of autonomous driving technology demands robust environmental perception systems capable of operating under extreme illumination variations, high-speed motion, and adverse weather conditions. While conventional frame-based cameras offer rich spatial and textural information, they suffer from fixed frame rates and limited dynamic range. Event cameras, as neuromorphic vision sensors, provide unique advantages in temporal resolution, dynamic range, and power efficiency through asynchronous pixel-level brightness change detection. This paper presents the first comprehensive review of frame-event camera fusion technology for autonomous driving applications. This paper establishes a fusion framework tailored to autonomous driving perception requirements and analyzes the complementary characteristics of both sensor modalities in addressing critical perception challenges. This paper proposes the first systematic classification of frame-event fusion architectures, covering multi-level approaches from data-level to decision-level integration, while tracing the technical evolution of fusion strategies. Additionally, this paper constructs a dataset evaluation framework for autonomous driving tasks, providing systematic benchmark selection guidance. Through detailed analysis of deployment challenges, this paper identifies key technical barriers including temporal synchronization, computational efficiency, and cross-modal calibration, alongside corresponding solutions. Finally, this paper presents perspectives on emerging paradigms and future directions, providing essential references for advancing practical frame-event fusion applications in autonomous driving.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103697"},"PeriodicalIF":15.5000,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fusion techniques of frame and event cameras in autonomous driving: A review\",\"authors\":\"Peijun Shi, Chee-Onn Chow, Wei Ru Wong\",\"doi\":\"10.1016/j.inffus.2025.103697\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The rapid advancement of autonomous driving technology demands robust environmental perception systems capable of operating under extreme illumination variations, high-speed motion, and adverse weather conditions. While conventional frame-based cameras offer rich spatial and textural information, they suffer from fixed frame rates and limited dynamic range. Event cameras, as neuromorphic vision sensors, provide unique advantages in temporal resolution, dynamic range, and power efficiency through asynchronous pixel-level brightness change detection. This paper presents the first comprehensive review of frame-event camera fusion technology for autonomous driving applications. This paper establishes a fusion framework tailored to autonomous driving perception requirements and analyzes the complementary characteristics of both sensor modalities in addressing critical perception challenges. This paper proposes the first systematic classification of frame-event fusion architectures, covering multi-level approaches from data-level to decision-level integration, while tracing the technical evolution of fusion strategies. 
Additionally, this paper constructs a dataset evaluation framework for autonomous driving tasks, providing systematic benchmark selection guidance. Through detailed analysis of deployment challenges, this paper identifies key technical barriers including temporal synchronization, computational efficiency, and cross-modal calibration, alongside corresponding solutions. Finally, this paper presents perspectives on emerging paradigms and future directions, providing essential references for advancing practical frame-event fusion applications in autonomous driving.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"127 \",\"pages\":\"Article 103697\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525007699\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525007699","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Fusion techniques of frame and event cameras in autonomous driving: A review
The rapid advancement of autonomous driving technology demands robust environmental perception systems capable of operating under extreme illumination variations, high-speed motion, and adverse weather conditions. While conventional frame-based cameras offer rich spatial and textural information, they suffer from fixed frame rates and limited dynamic range. Event cameras, as neuromorphic vision sensors, provide unique advantages in temporal resolution, dynamic range, and power efficiency through asynchronous pixel-level brightness-change detection. This paper presents the first comprehensive review of frame-event camera fusion technology for autonomous driving applications. It establishes a fusion framework tailored to autonomous driving perception requirements and analyzes the complementary characteristics of the two sensor modalities in addressing critical perception challenges. It proposes the first systematic classification of frame-event fusion architectures, covering approaches from data-level to decision-level integration, while tracing the technical evolution of fusion strategies. Additionally, it constructs a dataset evaluation framework for autonomous driving tasks, providing systematic guidance for benchmark selection. Through a detailed analysis of deployment challenges, it identifies key technical barriers, including temporal synchronization, computational efficiency, and cross-modal calibration, alongside corresponding solutions. Finally, it offers perspectives on emerging paradigms and future directions, providing essential references for advancing practical frame-event fusion in autonomous driving.
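To make the sensing model concrete, the sketch below simulates the asynchronous pixel-level brightness-change detection described in the abstract and assembles a simple data-level fusion input. This is a minimal illustration under stated assumptions, not the paper's method: the contrast threshold, sensor resolution, voxel-grid bin count, and all names are illustrative.

```python
# Hypothetical sketch: the standard event-camera generation model (an event
# fires when the log-intensity change crosses a contrast threshold) plus a
# simple data-level frame-event fusion tensor. All parameter values are
# assumptions for illustration, not values taken from the reviewed paper.
import numpy as np

C = 0.2          # assumed contrast threshold: event per multiple of |dlog I| >= C
H, W = 260, 346  # assumed sensor resolution (DAVIS346-like)

def events_from_frames(prev_frame, next_frame, t0, t1, threshold=C):
    """Approximate the asynchronous events emitted between two frames:
    each pixel fires one +1/-1 event per threshold crossing of its
    log-intensity, spread uniformly over the inter-frame interval."""
    eps = 1e-6
    d_log = np.log(next_frame + eps) - np.log(prev_frame + eps)
    n_events = np.floor(np.abs(d_log) / threshold).astype(int)
    ys, xs = np.nonzero(n_events)
    events = []
    for y, x in zip(ys, xs):
        k = n_events[y, x]
        pol = 1 if d_log[y, x] > 0 else -1
        for ts in np.linspace(t0, t1, k, endpoint=False):
            events.append((float(ts), int(x), int(y), pol))
    return events  # list of (t, x, y, polarity)

def voxel_grid(events, t0, t1, bins=5, h=H, w=W):
    """Accumulate event polarity into a (bins, H, W) voxel grid, a common
    event representation used for data-level fusion."""
    grid = np.zeros((bins, h, w), dtype=np.float32)
    for t, x, y, p in events:
        b = min(int((t - t0) / (t1 - t0) * bins), bins - 1)
        grid[b, y, x] += p
    return grid

# Data-level fusion: concatenate the RGB frame with the event voxel grid
# along the channel axis, giving one (3 + bins, H, W) network input.
frame_prev = np.random.rand(H, W)                                  # stand-in frames
frame_next = np.clip(frame_prev + 0.1 * np.random.randn(H, W), 0.01, 1.0)
evs = events_from_frames(frame_prev, frame_next, t0=0.0, t1=0.033)
vox = voxel_grid(evs, t0=0.0, t1=0.033)
rgb = np.random.rand(3, H, W)                                      # stand-in RGB frame
fused_input = np.concatenate([rgb, vox], axis=0)
print(fused_input.shape)                                           # (8, 260, 346)
```

Channel concatenation is only the simplest, data-level point on the fusion spectrum the review classifies; feature-level and decision-level designs instead fuse the two modalities deeper in the network or at the output stage.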
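Temporal synchronization, one of the deployment barriers the review identifies, reduces in its simplest form to pairing each frame timestamp with the slice of the (sorted) event stream around it. Below is a hedged sketch assuming a fixed symmetric window; real pipelines must additionally correct for clock offset and drift between the two sensors.

```python
# Hypothetical temporal-synchronization sketch: for each frame timestamp,
# select the event slice within a symmetric window. np.searchsorted on the
# sorted event timestamps keeps this O(log N) per frame. The window size is
# an assumption for illustration.
import numpy as np

def events_for_frame(event_ts, frame_t, half_window=0.005):
    """Return the index range [lo, hi) of events within +/- half_window
    seconds of a frame timestamp (event_ts must be sorted)."""
    lo = np.searchsorted(event_ts, frame_t - half_window, side="left")
    hi = np.searchsorted(event_ts, frame_t + half_window, side="right")
    return lo, hi

event_ts = np.sort(np.random.uniform(0.0, 1.0, size=100_000))  # stand-in stream
frame_times = np.arange(0.0, 1.0, 1 / 30)                      # 30 fps frames
for ft in frame_times[:3]:
    lo, hi = events_for_frame(event_ts, ft)
    print(f"frame at {ft:.3f}s pairs with {hi - lo} events")
```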
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, with a focus on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.