Multimodal Remote Sensing Image Registration via Modality Perception and Self-Supervised Position Estimation

IF 7.5 · CAS Tier 1 (Earth Science) · JCR Q1 (Engineering, Electrical & Electronic)
Yun Xiao;Chunlei Zhang;Bo Jiang;Yuan Chen;Jin Tang
{"title":"Multimodal Remote Sensing Image Registration via Modality Perception and Self-Supervised Position Estimation","authors":"Yun Xiao;Chunlei Zhang;Bo Jiang;Yuan Chen;Jin Tang","doi":"10.1109/TGRS.2025.3576290","DOIUrl":null,"url":null,"abstract":"Multimodal remote sensing image registration ensures that images from different sensors or modalities are spatial and informational consistent for effective comparison and analysis. However, due to the nonlinear modality gaps that exist between images, it is difficult to focus solely on spatial positional differences while ignoring the modality gaps. In this article, to address this issue, we propose a new framework for multimodal registration network, named MMRNet. The proposed framework comprises the following main aspects. First, a novel self-supervised positional misalignment estimator (PME) is designed for multimodal image registration. PME can efficiently overcome the modality gaps and learn the positional differences between multimodal images more reliably, optimizing the registration loss by minimizing the positional differences directly. Then, a new paradigm of modality translation, termed modality perception module (MPM), is introduced to effectively learn modality gaps and perform modality translation in the case of positional misalignment. Finally, we further design the modality perception guidance loss to supervise the modality translation task, which can encourage the fidelity of the generated pseudo-modality images. Our registration network integrates both rigid registration model and nonrigid registration model. The experimental results demonstrate that the proposed registration framework can obtain obviously superior performance in both rigid and nonrigid image registration tasks on optical-synthetic aperture radar (SAR) data, optical-map data, and optical-infrared data. The code and relevant dataset will be made publicly available at <uri>https://github.com/Ahuer-Lei/MMRNet</uri>.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-14"},"PeriodicalIF":7.5000,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11021679/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Multimodal remote sensing image registration ensures that images from different sensors or modalities are spatially and informationally consistent for effective comparison and analysis. However, because of the nonlinear modality gaps between images, it is difficult to focus solely on spatial positional differences while ignoring those gaps. To address this issue, in this article we propose a new multimodal registration network framework, named MMRNet. The proposed framework comprises the following main aspects. First, a novel self-supervised positional misalignment estimator (PME) is designed for multimodal image registration. The PME can efficiently overcome the modality gaps and learn the positional differences between multimodal images more reliably, optimizing the registration loss by minimizing the positional differences directly. Then, a new paradigm of modality translation, termed the modality perception module (MPM), is introduced to effectively learn the modality gaps and perform modality translation under positional misalignment. Finally, we design a modality perception guidance loss to supervise the modality translation task, encouraging the fidelity of the generated pseudo-modality images. Our registration network integrates both rigid and nonrigid registration models. Experimental results demonstrate that the proposed framework achieves clearly superior performance in both rigid and nonrigid registration tasks on optical-synthetic aperture radar (SAR), optical-map, and optical-infrared data. The code and relevant dataset will be made publicly available at https://github.com/Ahuer-Lei/MMRNet.
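The abstract gives no implementation details, so purely as an illustration, the sketch below shows one plausible shape for its two supervision signals: a self-supervised position loss (the PME term) computed against synthetically applied warp parameters, and a modality perception guidance loss on the generated pseudo-modality image. All function names, the L1 choices, and the weight `lam` are assumptions for this sketch, not the authors' code.

```python
# Minimal, hypothetical sketch of the two supervision signals named in the
# abstract; this is NOT the authors' implementation (see the linked repo).
import torch
import torch.nn.functional as F


def position_loss(pred_params: torch.Tensor,
                  synth_params: torch.Tensor) -> torch.Tensor:
    """Self-supervised registration loss: if training pairs are built from
    synthetic warps, the applied transform parameters are known, so the
    positional difference can be minimized directly (assumed L1 penalty)."""
    return F.l1_loss(pred_params, synth_params)


def perception_guidance_loss(pseudo_img: torch.Tensor,
                             target_img: torch.Tensor) -> torch.Tensor:
    """Modality perception guidance: encourage the generated pseudo-modality
    image to stay faithful to the real target-modality image (assumed L1)."""
    return F.l1_loss(pseudo_img, target_img)


def total_loss(pred_params, synth_params, pseudo_img, target_img,
               lam: float = 0.1) -> torch.Tensor:
    """Combined objective; the weight `lam` is an assumed hyperparameter."""
    return (position_loss(pred_params, synth_params)
            + lam * perception_guidance_loss(pseudo_img, target_img))
```

Under this reading, "self-supervised" would mean the ground-truth positional differences come for free from the synthetic warps, so no manually registered image pairs are needed to train the PME; the actual design should be checked against the linked repository.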
Source Journal

IEEE Transactions on Geoscience and Remote Sensing (Engineering & Technology: Geochemistry & Geophysics)

CiteScore: 11.50
Self-citation rate: 28.00%
Annual articles: 1912
Review time: 4.0 months

About the journal: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.