{"title":"基于实时多模态融合的低光环境下增强目标检测","authors":"Yuhong Wu, Jinkai Cui, Kuoye Niu, Yanlong Lu, Lijun Cheng, Shengze Cai, Chao Xu","doi":"10.1049/csy2.70011","DOIUrl":null,"url":null,"abstract":"<p>Accurate target detection in low-light environments is crucial for unmanned aerial vehicles (UAVs) and autonomous driving applications. In this study, the authors introduce a real-time multimodal fusion for enhanced detection (RMF-ED), a novel framework designed to overcome the limitations of low-light target detection. By leveraging the complementary capabilities of near-infrared (NIR) cameras and light detection and ranging (LiDAR) sensors, RMF-ED enhances detection performance. An advanced NIR generative adversarial network (NIR-GAN) model was developed to address the lack of annotated NIR datasets, integrating structural similarity index measure (SSIM) loss and L1 loss functions. This approach enables the generation of high-quality NIR images from RGB datasets, bridging a critical gap in training data. Furthermore, the multimodal fusion algorithm integrates RGB images, NIR images, and LiDAR point clouds, ensuring consistency and accuracy in proposal fusion. Experimental results on the KITTI dataset demonstrate that RMF-ED achieves performance comparable to or exceeding state-of-the-art fusion algorithms, with a computational time of only 21 ms. These features make RMF-ED an efficient and versatile solution for real-time applications in low-light environments.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"7 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70011","citationCount":"0","resultStr":"{\"title\":\"RMF-ED: Real-Time Multimodal Fusion for Enhanced Target Detection in Low-Light Environments\",\"authors\":\"Yuhong Wu, Jinkai Cui, Kuoye Niu, Yanlong Lu, Lijun Cheng, Shengze Cai, Chao Xu\",\"doi\":\"10.1049/csy2.70011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Accurate target detection in low-light environments is crucial for unmanned aerial vehicles (UAVs) and autonomous driving applications. In this study, the authors introduce a real-time multimodal fusion for enhanced detection (RMF-ED), a novel framework designed to overcome the limitations of low-light target detection. By leveraging the complementary capabilities of near-infrared (NIR) cameras and light detection and ranging (LiDAR) sensors, RMF-ED enhances detection performance. An advanced NIR generative adversarial network (NIR-GAN) model was developed to address the lack of annotated NIR datasets, integrating structural similarity index measure (SSIM) loss and L1 loss functions. This approach enables the generation of high-quality NIR images from RGB datasets, bridging a critical gap in training data. Furthermore, the multimodal fusion algorithm integrates RGB images, NIR images, and LiDAR point clouds, ensuring consistency and accuracy in proposal fusion. Experimental results on the KITTI dataset demonstrate that RMF-ED achieves performance comparable to or exceeding state-of-the-art fusion algorithms, with a computational time of only 21 ms. 
These features make RMF-ED an efficient and versatile solution for real-time applications in low-light environments.</p>\",\"PeriodicalId\":34110,\"journal\":{\"name\":\"IET Cybersystems and Robotics\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2025-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70011\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Cybersystems and Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/csy2.70011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Cybersystems and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/csy2.70011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Accurate target detection in low-light environments is crucial for unmanned aerial vehicle (UAV) and autonomous driving applications. In this study, the authors introduce real-time multimodal fusion for enhanced detection (RMF-ED), a novel framework designed to overcome the limitations of low-light target detection. RMF-ED improves detection performance by leveraging the complementary capabilities of near-infrared (NIR) cameras and light detection and ranging (LiDAR) sensors. To address the lack of annotated NIR datasets, the authors developed an NIR generative adversarial network (NIR-GAN) model that combines a structural similarity index measure (SSIM) loss with an L1 loss, enabling the generation of high-quality NIR images from RGB datasets and bridging a critical gap in training data. Furthermore, the multimodal fusion algorithm integrates RGB images, NIR images, and LiDAR point clouds, ensuring consistency and accuracy in proposal fusion. Experimental results on the KITTI dataset demonstrate that RMF-ED matches or exceeds state-of-the-art fusion algorithms, with a computational time of only 21 ms. These features make RMF-ED an efficient and versatile solution for real-time applications in low-light environments.
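The abstract names the SSIM + L1 combination used to train NIR-GAN but gives neither its exact form nor the weighting. The following is a minimal PyTorch-style sketch of such an objective; the uniform-window SSIM, the constants `c1`/`c2` (which assume pixel values in [0, 1]), and the blend weight `alpha` are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a combined SSIM + L1 reconstruction objective of the kind
# the abstract attributes to NIR-GAN. Window shape, constants, and `alpha`
# are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F

def ssim(x: torch.Tensor, y: torch.Tensor, window: int = 11,
         c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Mean SSIM over a batch of (N, C, H, W) images in [0, 1].

    Uses a uniform averaging window for simplicity (the standard SSIM
    uses a Gaussian window)."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def generator_recon_loss(fake_nir: torch.Tensor, real_nir: torch.Tensor,
                         alpha: float = 0.84) -> torch.Tensor:
    """Weighted sum of (1 - SSIM) and L1, a common pairing in image synthesis."""
    return (alpha * (1.0 - ssim(fake_nir, real_nir))
            + (1.0 - alpha) * F.l1_loss(fake_nir, real_nir))
```

In practice this reconstruction term would be added to the usual adversarial loss; SSIM encourages structural fidelity while L1 penalizes per-pixel intensity error.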
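The abstract also states that proposals from the RGB, NIR, and LiDAR streams are fused consistently, without spelling out the mechanism. Below is a minimal sketch of one plausible scheme, greedy IoU clustering with box averaging, offered purely as an illustration rather than the paper's algorithm; `fuse_proposals`, `iou_thr`, and the `(x1, y1, x2, y2)`-plus-score representation are assumptions.

```python
# Hedged sketch: IoU-based fusion of 2D proposals from per-modality detectors
# (RGB, NIR, LiDAR projected to the image plane). Illustrative only; not the
# authors' algorithm.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def fuse_proposals(streams, iou_thr=0.5):
    """Greedy IoU clustering across modality streams.

    streams: one list per modality, each holding (box, score) pairs with
    box = np.array([x1, y1, x2, y2]). Boxes in a cluster are averaged and
    the cluster keeps its best score.
    """
    pool = sorted((d for s in streams for d in s),
                  key=lambda d: d[1], reverse=True)
    fused = []
    while pool:
        seed = pool[0][0]  # highest-confidence unassigned box
        group = [d for d in pool if iou(seed, d[0]) >= iou_thr]
        pool = [d for d in pool if iou(seed, d[0]) < iou_thr]
        fused.append((np.stack([b for b, _ in group]).mean(axis=0),
                      max(s for _, s in group)))
    return fused

# Example: overlapping RGB and NIR detections merge; the LiDAR box stands alone.
rgb = [(np.array([100.0, 100.0, 200.0, 200.0]), 0.9)]
nir = [(np.array([102.0, 98.0, 198.0, 205.0]), 0.7)]
lidar = [(np.array([400.0, 120.0, 480.0, 180.0]), 0.6)]
print(fuse_proposals([rgb, nir, lidar]))  # two fused detections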