Nighttime traffic object detection via adaptively integrating event and frame domains

Impact Factor: 6.2 · CAS Tier 3 (Multidisciplinary) · JCR Q1, Multidisciplinary
Yu Jiang, Yuehang Wang, Minghao Zhao, Yongji Zhang, Hong Qi
DOI: 10.1016/j.fmre.2023.08.004
Journal: Fundamental Research, Vol. 5, Issue 4, pp. 1633-1644
Publication date: 2025-07-01
URL: https://www.sciencedirect.com/science/article/pii/S2667325823002376
Citations: 0

Abstract

Intelligent perception is crucial in Intelligent Transportation Systems (ITS), with vision cameras as critical components. However, traditional RGB cameras exhibit a significant decline in performance when capturing nighttime traffic scenes, limiting their effectiveness in supporting ITS. In contrast, event cameras possess a high dynamic range (140 dB vs. 60 dB for traditional cameras), enabling them to overcome frame degradation in low-light conditions. Recently, multimodal learning paradigms have made substantial progress in various vision tasks, such as image-text retrieval. Motivated by this progress, we propose an adaptive selection and fusion detection method that leverages both the event and RGB frame domains to jointly optimize nighttime traffic object detection. To address the challenge of unbalanced multimodal data fusion, we design a learnable adaptive selection and fusion module. This module performs feature ranking and fusion in the channel dimension, enabling efficient multimodal fusion. Additionally, we construct a novel multi-level feature pyramid network based on multimodal attention fusion. This network extracts latent features to enhance robustness in detecting nighttime traffic objects. Furthermore, we curate a dataset for nighttime traffic scenarios comprising RGB frames and corresponding event streams. Through experiments, we demonstrate that our proposed method outperforms current state-of-the-art event-based, frame-based, and event-frame fusion methods. This highlights the effectiveness of integrating the event and frame domains in enhancing nighttime traffic object detection.
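The abstract describes the adaptive selection and fusion module only at a high level: channels from the two modalities are ranked and fused along the channel dimension. The sketch below is a hypothetical plain-Python illustration of that idea, not the authors' actual design; the function names and the global-average-pooling-plus-softmax scoring are assumptions standing in for whatever learnable scoring the paper uses.

```python
import math

def channel_scores(feat):
    """Score each channel by its global average activation (GAP).
    feat is a C x N list of lists: C channels, N spatial positions."""
    return [sum(ch) / len(ch) for ch in feat]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def rank_channels(feat):
    """Return channel indices ordered from most to least active
    (the 'feature ranking' step, approximated by GAP scores)."""
    scores = channel_scores(feat)
    return sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)

def adaptive_fuse(event_feat, frame_feat):
    """Fuse two C x N feature maps channel-wise: each channel of each
    modality is weighted by a softmax over its pooled activation, so the
    more informative modality dominates that channel in the fused map."""
    w_event = softmax(channel_scores(event_feat))
    w_frame = softmax(channel_scores(frame_feat))
    return [[we * e + wf * f for e, f in zip(ec, fc)]
            for we, wf, ec, fc in zip(w_event, w_frame, event_feat, frame_feat)]

# Toy example: event channel 0 is active, frame channel 1 is active.
event = [[1.0, 1.0], [0.0, 0.0]]
frame = [[0.0, 0.0], [1.0, 1.0]]
fused = adaptive_fuse(event, frame)
```

In a real detector the channel weights would come from learnable layers (e.g. 1x1 convolutions over pooled features) trained end to end; GAP merely stands in here so the fusion arithmetic is concrete.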


Source journal: Fundamental Research (Multidisciplinary)
CiteScore: 4.00
Self-citation rate: 1.60%
Articles per year: 294
Review time: 79 days