{"title":"SFA-Net: A SAM-guided focused attention network for multimodal remote sensing image matching","authors":"Tian Gao, Chaozhen Lan, Wenjun Huang, Sheng Wang","doi":"10.1016/j.isprsjprs.2025.02.032","DOIUrl":null,"url":null,"abstract":"<div><div>The robust and accurate matching of multimodal remote sensing images (MRSIs) is crucial for realizing the fusion of multisource remote sensing image information. Traditional matching methods fail to exhibit effective performance when confronted with significant nonlinear radiometric distortions (NRDs) and geometric differences in MRSIs. To address this critical issue, we propose a novel framework called the SAM-guided Focused Attention Network for MRSI matching (SFA-Net). Firstly, we utilize the Segment Anything Model to extract the edge structural features of MRSIs. In the meantime, convolutional neural networks are employed to extract the local deep features of MRSIs. The obtained edge structural features are then used as a prior information to guide the region self-attention network and the focused fusion cross-attention network. This improves the uniqueness of local depth features in a single image and enhances the cross-modal representation of local depth features across different images. Finally, metric learning and optimization algorithms are applied to improve the success rate of feature matching, further enhancing the accuracy and robustness of the matching results. Experimental results on 1050 MRSI pairs confirm that SFA-Net is able to achieve high-quality matching on large-scale challenging MRSI datasets, with good adaptation to severe NRDs and geometric differences. SFA-Net outperforms state-of-the-art algorithms qualitatively and quantitatively, including RIFT, ASS, CoFSM, WSSF, HOWP, CMM-Net, R2D2, ECOTR, and LightGlue. 
Our code<span><span><sup>1</sup></span></span> and dataset will be made publicly available upon publication of the paper.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"223 ","pages":"Pages 188-206"},"PeriodicalIF":10.6000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625000905","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0
Abstract
The robust and accurate matching of multimodal remote sensing images (MRSIs) is crucial for realizing the fusion of multisource remote sensing image information. Traditional matching methods fail to perform effectively when confronted with significant nonlinear radiometric distortions (NRDs) and geometric differences in MRSIs. To address this critical issue, we propose a novel framework called the SAM-guided Focused Attention Network for MRSI matching (SFA-Net). First, we utilize the Segment Anything Model (SAM) to extract the edge structural features of MRSIs. Meanwhile, convolutional neural networks are employed to extract the local deep features of MRSIs. The obtained edge structural features are then used as prior information to guide the region self-attention network and the focused fusion cross-attention network. This improves the distinctiveness of local deep features within a single image and enhances the cross-modal representation of local deep features across different images. Finally, metric learning and optimization algorithms are applied to improve the success rate of feature matching, further enhancing the accuracy and robustness of the matching results. Experimental results on 1050 MRSI pairs confirm that SFA-Net achieves high-quality matching on large-scale challenging MRSI datasets, with good adaptation to severe NRDs and geometric differences. SFA-Net outperforms state-of-the-art algorithms qualitatively and quantitatively, including RIFT, ASS, CoFSM, WSSF, HOWP, CMM-Net, R2D2, ECOTR, and LightGlue. Our code and dataset will be made publicly available upon publication of the paper.
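The guidance mechanism the abstract describes (edge structural features used as a prior that steers cross-attention between the two modalities) can be illustrated with a minimal toy sketch. This is not the authors' SFA-Net implementation: the function name `edge_guided_cross_attention`, the log-domain prior bias, and the scalar edge-strength encoding of SAM output are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def edge_guided_cross_attention(feat_a, feat_b, edge_prior_b):
    """Toy cross-attention from modality-A descriptors to modality-B
    descriptors, biased toward locations flagged as edge structure.

    feat_a: (Na, d) local deep features of image A (e.g. from a CNN)
    feat_b: (Nb, d) local deep features of image B
    edge_prior_b: (Nb,) edge-strength weights in (0, 1], hypothetically
                  derived from SAM segment boundaries
    """
    d = feat_a.shape[1]
    scores = feat_a @ feat_b.T / np.sqrt(d)          # (Na, Nb) scaled similarities
    scores += np.log(edge_prior_b + 1e-6)[None, :]   # edge prior as a log-domain bias
    attn = softmax(scores, axis=1)                   # each row sums to 1
    fused = attn @ feat_b                            # (Na, d) cross-modal features
    return attn, fused

# Small synthetic example with random descriptors.
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((5, 8))
feat_b = rng.standard_normal((7, 8))
prior = rng.uniform(0.1, 1.0, size=7)
attn, fused = edge_guided_cross_attention(feat_a, feat_b, prior)
print(fused.shape)  # (5, 8)
```

Biasing the attention logits rather than masking them outright keeps the operation differentiable and lets low-edge regions still contribute when descriptor similarity is strong; whether SFA-Net uses this particular formulation is not stated in the abstract.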
About the journal:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It provides a platform for scientists and professionals worldwide working in the disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and the dissemination of advancements in these disciplines, while also serving as a comprehensive reference source and archive.
P&RS publishes high-quality, peer-reviewed research papers, preferably original and previously unpublished, covering scientific/research, technological-development, or application/practical aspects. The journal also welcomes papers based on presentations from ISPRS meetings, provided they constitute significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.