Dual-Dynamic Cross-Modal Interaction Network for Multimodal Remote Sensing Object Detection

Impact Factor: 7.5 | CAS Zone 1 (Earth Science) | JCR Q1 (Engineering, Electrical & Electronic)
Wei Bao, Meiyu Huang, Jingjing Hu, Xueshuang Xiang
{"title":"Dual-Dynamic Cross-Modal Interaction Network for Multimodal Remote Sensing Object Detection","authors":"Wei Bao;Meiyu Huang;Jingjing Hu;Xueshuang Xiang","doi":"10.1109/TGRS.2025.3530085","DOIUrl":null,"url":null,"abstract":"Multimodal remote sensing object detection (MM-RSOD) holds great promise for around-the-clock applications. However, it faces challenges in effectively extracting complementary features due to the modality inconsistency and redundancy. Inconsistency can lead to semantic-spatial misalignment, while redundancy introduces uncertainty that is specific to each modality. To overcome these challenges and enhance complementarity exploration and exploitation, this article proposes a dual-dynamic cross-modal interaction network (DDCINet), a novel framework comprising two key modules: a dual-dynamic cross-modal interaction (DDCI) module and a dynamic feature fusion (DFF) module. The DDCI module simultaneously addresses both modality inconsistency and redundancy by employing a collaborative design of channel-gated spatial cross-attention (CSCA) and cross-modal dynamic filters (CMDFs) on evenly segmented multimodal features. The CSCA component enhances the semantic-spatial correlation between modalities by identifying the most relevant channel-spatial features through cross-attention, addressing modality inconsistency. In parallel, the CMDF component achieves cross-modal context interaction through static convolution and further generates dynamic spatial-variant kernels to filter out irrelevant information between modalities, addressing modality redundancy. Following the improved feature extraction, the DFF module dynamically adjusts interchannel dependencies guided by modal-specific global context to fuse features, achieving better complementarity exploitation. Extensive experiments conducted on three MM-RSOD datasets confirm the superiority and generalizability of the DDCINet framework. Notably, our DDCINet, based on the RoI Transformer benchmark and ResNet50 backbone, achieves 78.4% mAP50 on the DroneVehicle test set and outperforms state-of-the-art (SOTA) methods by large margins.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-13"},"PeriodicalIF":7.5000,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10843244/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Multimodal remote sensing object detection (MM-RSOD) holds great promise for around-the-clock applications. However, it faces challenges in effectively extracting complementary features due to modality inconsistency and redundancy. Inconsistency can lead to semantic-spatial misalignment, while redundancy introduces uncertainty specific to each modality. To overcome these challenges and enhance the exploration and exploitation of complementarity, this article proposes the dual-dynamic cross-modal interaction network (DDCINet), a novel framework comprising two key modules: a dual-dynamic cross-modal interaction (DDCI) module and a dynamic feature fusion (DFF) module. The DDCI module simultaneously addresses modality inconsistency and redundancy through a collaborative design of channel-gated spatial cross-attention (CSCA) and cross-modal dynamic filters (CMDFs) applied to evenly segmented multimodal features. The CSCA component enhances the semantic-spatial correlation between modalities by identifying the most relevant channel-spatial features through cross-attention, addressing modality inconsistency. In parallel, the CMDF component achieves cross-modal context interaction through static convolution and further generates dynamic spatially variant kernels to filter out irrelevant information between modalities, addressing modality redundancy. Following this improved feature extraction, the DFF module dynamically adjusts interchannel dependencies, guided by modality-specific global context, to fuse features, achieving better exploitation of complementarity. Extensive experiments on three MM-RSOD datasets confirm the superiority and generalizability of the DDCINet framework. Notably, our DDCINet, based on the RoI Transformer baseline with a ResNet50 backbone, achieves 78.4% mAP50 on the DroneVehicle test set and outperforms state-of-the-art (SOTA) methods by large margins.
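Only the abstract is available on this page, so as a reading aid here is a minimal PyTorch sketch of the three mechanisms it names: channel-gated spatial cross-attention (CSCA), spatially variant cross-modal dynamic filtering (CMDF), and context-guided dynamic fusion (DFF). This is not the authors' DDCINet implementation; all class names, hyperparameters, and design details below (CrossModalAttention, SpatialDynamicFilter, ContextGatedFusion, the shared per-pixel 3x3 kernel, and so on) are assumptions made for illustration.

```python
# Hypothetical sketch only -- not the authors' DDCINet code. It illustrates
# the general shape of the three mechanisms the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalAttention(nn.Module):
    """CSCA-like idea: gate channels with global context, then let one
    modality's features attend to the other's over spatial positions."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        # SE-style channel gate computed from globally pooled context
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = (x * self.gate(x)).flatten(2).transpose(1, 2)           # (B, HW, C)
        kv = (other * self.gate(other)).flatten(2).transpose(1, 2)  # (B, HW, C)
        out, _ = self.attn(q, kv, kv)                               # x attends to other
        return x + out.transpose(1, 2).reshape(b, c, h, w)          # residual


class SpatialDynamicFilter(nn.Module):
    """CMDF-like idea: predict a 3x3 kernel per spatial location from the
    concatenated modalities and use it to filter one stream, so each pixel
    can suppress information the other modality marks as irrelevant."""

    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        # one kernel per pixel, shared across channels (an assumption here)
        self.pred = nn.Conv2d(2 * channels, k * k, 1)

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        kernels = self.pred(torch.cat([x, other], 1)).softmax(1)    # (B, k*k, H, W)
        patches = F.unfold(x, self.k, padding=self.k // 2)          # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        out = (patches * kernels.view(b, 1, -1, h * w)).sum(2)      # weighted sum per pixel
        return out.view(b, c, h, w)


class ContextGatedFusion(nn.Module):
    """DFF-like idea: re-weight interchannel dependencies of the stacked
    features with a gate from global context, then fuse with a 1x1 conv."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * channels, 2 * channels, 1), nn.Sigmoid()
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([a, b], 1)
        return self.fuse(x * self.gate(x))


# Toy usage on paired RGB / infrared feature maps.
rgb, ir = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
rgb = CrossModalAttention(64)(rgb, ir)      # align RGB features to IR semantics
rgb = SpatialDynamicFilter(64)(rgb, ir)     # filter out modality-specific noise
fused = ContextGatedFusion(64)(rgb, ir)     # (2, 64, 32, 32)
```

The per-pixel softmax kernel in SpatialDynamicFilter is one common way to realize spatially variant filtering; the paper's actual kernel generation, its static-convolution context interaction, and the "evenly segmented" feature grouping are not reproduced here.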
Source Journal

IEEE Transactions on Geoscience and Remote Sensing (TGRS)
Category: Engineering & Technology, Geochemistry & Geophysics
CiteScore: 11.50
Self-citation rate: 28.00%
Articles per year: 1912
Review time: 4.0 months
Journal description: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.