{"title":"Dual-Dynamic Cross-Modal Interaction Network for Multimodal Remote Sensing Object Detection","authors":"Wei Bao;Meiyu Huang;Jingjing Hu;Xueshuang Xiang","doi":"10.1109/TGRS.2025.3530085","DOIUrl":null,"url":null,"abstract":"Multimodal remote sensing object detection (MM-RSOD) holds great promise for around-the-clock applications. However, it faces challenges in effectively extracting complementary features due to the modality inconsistency and redundancy. Inconsistency can lead to semantic-spatial misalignment, while redundancy introduces uncertainty that is specific to each modality. To overcome these challenges and enhance complementarity exploration and exploitation, this article proposes a dual-dynamic cross-modal interaction network (DDCINet), a novel framework comprising two key modules: a dual-dynamic cross-modal interaction (DDCI) module and a dynamic feature fusion (DFF) module. The DDCI module simultaneously addresses both modality inconsistency and redundancy by employing a collaborative design of channel-gated spatial cross-attention (CSCA) and cross-modal dynamic filters (CMDFs) on evenly segmented multimodal features. The CSCA component enhances the semantic-spatial correlation between modalities by identifying the most relevant channel-spatial features through cross-attention, addressing modality inconsistency. In parallel, the CMDF component achieves cross-modal context interaction through static convolution and further generates dynamic spatial-variant kernels to filter out irrelevant information between modalities, addressing modality redundancy. Following the improved feature extraction, the DFF module dynamically adjusts interchannel dependencies guided by modal-specific global context to fuse features, achieving better complementarity exploitation. Extensive experiments conducted on three MM-RSOD datasets confirm the superiority and generalizability of the DDCINet framework. Notably, our DDCINet, based on the RoI Transformer benchmark and ResNet50 backbone, achieves 78.4% mAP50 on the DroneVehicle test set and outperforms state-of-the-art (SOTA) methods by large margins.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-13"},"PeriodicalIF":7.5000,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10843244/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
Multimodal remote sensing object detection (MM-RSOD) holds great promise for around-the-clock applications. However, it faces challenges in effectively extracting complementary features due to modality inconsistency and redundancy. Inconsistency can lead to semantic-spatial misalignment, while redundancy introduces uncertainty that is specific to each modality. To overcome these challenges and improve the exploration and exploitation of cross-modal complementarity, this article proposes a dual-dynamic cross-modal interaction network (DDCINet), a novel framework comprising two key modules: a dual-dynamic cross-modal interaction (DDCI) module and a dynamic feature fusion (DFF) module. The DDCI module simultaneously addresses both modality inconsistency and redundancy by employing a collaborative design of channel-gated spatial cross-attention (CSCA) and cross-modal dynamic filters (CMDFs) on evenly segmented multimodal features. The CSCA component enhances the semantic-spatial correlation between modalities by identifying the most relevant channel-spatial features through cross-attention, addressing modality inconsistency. In parallel, the CMDF component achieves cross-modal context interaction through static convolution and further generates dynamic spatial-variant kernels to filter out irrelevant information between modalities, addressing modality redundancy. After this improved feature extraction, the DFF module dynamically adjusts interchannel dependencies guided by modality-specific global context to fuse features, enabling better exploitation of complementary information. Extensive experiments conducted on three MM-RSOD datasets confirm the superiority and generalizability of the DDCINet framework. Notably, our DDCINet, built on the RoI Transformer detector with a ResNet50 backbone, achieves 78.4% mAP50 on the DroneVehicle test set and outperforms state-of-the-art (SOTA) methods by large margins.
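To make the CSCA mechanism concrete, the sketch below shows one way channel-gated spatial cross-attention between two modality feature maps could be realized in PyTorch: each modality's channels are first gated by its own global context, and one modality then attends spatially to the other. The class name CSCASketch, the squeeze-and-excitation-style gate, and all shapes and hyperparameters are illustrative assumptions, not the paper's exact implementation.

    # Minimal sketch, assuming PyTorch; illustrative of CSCA, not the authors' code.
    import torch
    import torch.nn as nn


    class CSCASketch(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            # Squeeze-and-excitation-style channel gates, one per modality
            # (an assumption about how "channel-gated" is realized).
            self.gate_a = self._make_gate(channels, reduction)
            self.gate_b = self._make_gate(channels, reduction)
            # 1x1 projections: queries from modality A, keys/values from modality B.
            self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
            self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
            self.to_v = nn.Conv2d(channels, channels, kernel_size=1)

        @staticmethod
        def _make_gate(channels: int, reduction: int) -> nn.Sequential:
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x_a.shape
            # Channel gating: suppress uninformative channels in each modality.
            x_a = x_a * self.gate_a(x_a)
            x_b = x_b * self.gate_b(x_b)
            # Spatial cross-attention: positions in A query positions in B.
            q = self.to_q(x_a).flatten(2).transpose(1, 2)  # (B, HW, C)
            k = self.to_k(x_b).flatten(2)                  # (B, C, HW)
            v = self.to_v(x_b).flatten(2).transpose(1, 2)  # (B, HW, C)
            attn = torch.softmax(q @ k / c**0.5, dim=-1)   # (B, HW, HW); O((HW)^2) memory
            out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
            # Residual: modality A enhanced with spatially aligned cues from B.
            return x_a + out


    if __name__ == "__main__":
        csca = CSCASketch(channels=64)
        rgb = torch.randn(2, 64, 32, 32)  # e.g., RGB branch features
        ir = torch.randn(2, 64, 32, 32)   # e.g., infrared branch features
        print(csca(rgb, ir).shape)        # torch.Size([2, 64, 32, 32])

The full (HW x HW) attention map is quadratic in the number of spatial positions, which is one plausible reason the paper applies the interaction on evenly segmented (channel-split) multimodal features rather than on whole feature maps.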
Journal Introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.