Efficient multimodal object detection via coordinate attention fusion for adverse environmental conditions

IF 2.9 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Xiangjin Zeng , Genghuan Liu , Jianming Chen , Xiaoyan Wu , Jianglei Di , Zhenbo Ren , Yuwen Qin
{"title":"在不利环境条件下通过协调注意力融合实现高效的多模态目标检测","authors":"Xiangjin Zeng ,&nbsp;Genghuan Liu ,&nbsp;Jianming Chen ,&nbsp;Xiaoyan Wu ,&nbsp;Jianglei Di ,&nbsp;Zhenbo Ren ,&nbsp;Yuwen Qin","doi":"10.1016/j.dsp.2024.104873","DOIUrl":null,"url":null,"abstract":"<div><div>Integrating complementary visual information from multimodal image pairs can significantly improve the robustness and accuracy of object detection algorithms, particularly in challenging environments. However, a key challenge lies in the effective fusion of modality-specific features within these algorithms. To address this, we propose a novel lightweight fusion module, termed the Coordinate Attention Fusion (CAF) module, built on the YOLOv5 object detection framework. The CAF module exploits differential amplification and coordinated attention mechanisms to selectively enhance distinctive cross-modal features, thereby preserving critical modality-specific information. To further optimize performance and reduce computational overhead, the two-stream backbone network has been refined, reducing the model's parameter count without compromising accuracy. Comprehensive experiments conducted on two benchmark multimodal datasets demonstrate that the proposed approach consistently surpasses conventional methods and outperforms existing state-of-the-art multimodal object detection algorithms. These findings underscore the potential of cross-modality fusion as a promising direction for improving object detection in adverse conditions.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"156 ","pages":"Article 104873"},"PeriodicalIF":2.9000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Efficient multimodal object detection via coordinate attention fusion for adverse environmental conditions\",\"authors\":\"Xiangjin Zeng ,&nbsp;Genghuan Liu ,&nbsp;Jianming Chen ,&nbsp;Xiaoyan Wu ,&nbsp;Jianglei Di ,&nbsp;Zhenbo Ren ,&nbsp;Yuwen Qin\",\"doi\":\"10.1016/j.dsp.2024.104873\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Integrating complementary visual information from multimodal image pairs can significantly improve the robustness and accuracy of object detection algorithms, particularly in challenging environments. However, a key challenge lies in the effective fusion of modality-specific features within these algorithms. To address this, we propose a novel lightweight fusion module, termed the Coordinate Attention Fusion (CAF) module, built on the YOLOv5 object detection framework. The CAF module exploits differential amplification and coordinated attention mechanisms to selectively enhance distinctive cross-modal features, thereby preserving critical modality-specific information. To further optimize performance and reduce computational overhead, the two-stream backbone network has been refined, reducing the model's parameter count without compromising accuracy. Comprehensive experiments conducted on two benchmark multimodal datasets demonstrate that the proposed approach consistently surpasses conventional methods and outperforms existing state-of-the-art multimodal object detection algorithms. 
These findings underscore the potential of cross-modality fusion as a promising direction for improving object detection in adverse conditions.</div></div>\",\"PeriodicalId\":51011,\"journal\":{\"name\":\"Digital Signal Processing\",\"volume\":\"156 \",\"pages\":\"Article 104873\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1051200424004974\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200424004974","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Integrating complementary visual information from multimodal image pairs can significantly improve the robustness and accuracy of object detection algorithms, particularly in challenging environments. However, a key challenge lies in the effective fusion of modality-specific features within these algorithms. To address this, we propose a novel lightweight fusion module, termed the Coordinate Attention Fusion (CAF) module, built on the YOLOv5 object detection framework. The CAF module exploits differential amplification and coordinated attention mechanisms to selectively enhance distinctive cross-modal features, thereby preserving critical modality-specific information. To further optimize performance and reduce computational overhead, the two-stream backbone network has been refined, reducing the model's parameter count without compromising accuracy. Comprehensive experiments conducted on two benchmark multimodal datasets demonstrate that the proposed approach consistently surpasses conventional methods and outperforms existing state-of-the-art multimodal object detection algorithms. These findings underscore the potential of cross-modality fusion as a promising direction for improving object detection in adverse conditions.
Journal
Digital Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 5.30
Self-citation rate: 17.20%
Annual publications: 435
Review time: 66 days
Journal description: Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal. The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing including seismic signal processing
• chemioinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy