Cross-Modality Image Matching Network With Modality-Invariant Feature Representation for Airborne-Ground Thermal Infrared and Visible Datasets

IF 8.6 · CAS Tier 1 (Earth Science) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Song Cui, Ailong Ma, Yuting Wan, Yanfei Zhong, Bin Luo, Miaozhong Xu
{"title":"Cross-Modality Image Matching Network With Modality-Invariant Feature Representation for Airborne-Ground Thermal Infrared and Visible Datasets","authors":"Song Cui;Ailong Ma;Yuting Wan;Yanfei Zhong;Bin Luo;Miaozhong Xu","doi":"10.1109/TGRS.2021.3099506","DOIUrl":null,"url":null,"abstract":"Thermal infrared (TIR) remote-sensing imagery can allow objects to be imaged clearly at night through the long-wave infrared, so that the fusion of thermal infrared and visible (VIS) imagery is a way to improve the remote-sensing interpretation ability. However, due to the large radiation difference between the two kinds of images, it is very difficult to match them. One of the most important issues is the lack of comprehensive consideration of the modality-specific information and modality-shared information, which makes it difficult for the existing methods to obtain a modality-invariant feature representation. In this article, a cross-modality image matching network, which we refer to as CMM-Net, is proposed to realize thermal infrared and visible image matching by learning a modality-invariant feature representation. First, in order to extract the modality-specific features of the imagery, the framework constructs a shallow two-branch network to make full use of the modality-specific information, without sharing parameters. Second, in order to extract the high-level semantic information between the different modalities, modality-shared layers are embedded into the deep layers of the network. In addition, three novel loss functions are designed and combined to learn the modality-invariant feature representation, that is, the discriminative loss of the non-corresponding features in the same modality, the cross-modality loss of the corresponding features between different modalities, and the cross-modality triplet (CMT) loss. The multimodal matching experiments conducted with ground- and airborne-based thermal infrared images and visible images showed that the proposed method outperforms the existing image matching methods by about 2% and 6% for the ground and airborne images, respectively.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"60 ","pages":"1-14"},"PeriodicalIF":8.6000,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/9506998/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 17

Abstract

Thermal infrared (TIR) remote-sensing imagery allows objects to be imaged clearly at night in the long-wave infrared band, so fusing thermal infrared and visible (VIS) imagery is a way to improve the remote-sensing interpretation ability. However, due to the large radiation difference between the two kinds of images, matching them is very difficult. One of the most important issues is the lack of comprehensive consideration of the modality-specific and modality-shared information, which makes it difficult for the existing methods to obtain a modality-invariant feature representation. In this article, a cross-modality image matching network, referred to as CMM-Net, is proposed to realize thermal infrared and visible image matching by learning a modality-invariant feature representation. First, in order to extract the modality-specific features of the imagery, the framework constructs a shallow two-branch network, without sharing parameters, to make full use of the modality-specific information. Second, in order to extract the high-level semantic information shared between the different modalities, modality-shared layers are embedded in the deep layers of the network. In addition, three novel loss functions are designed and combined to learn the modality-invariant feature representation: the discriminative loss between non-corresponding features in the same modality, the cross-modality loss between corresponding features in different modalities, and the cross-modality triplet (CMT) loss. Multimodal matching experiments conducted with ground-based and airborne thermal infrared and visible images showed that the proposed method outperforms the existing image matching methods by about 2% and 6% for the ground and airborne images, respectively.
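The abstract compresses two design decisions that are easy to misread: the shallow branches do not share parameters (one branch per modality), while the deep layers do. The following is a minimal PyTorch sketch of that layout together with a generic form of a cross-modality triplet loss; all layer sizes, the descriptor dimension, the margin, and the shifted-batch negative mining are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """3x3 convolution + batch norm + ReLU: a generic building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class CMMNetSketch(nn.Module):
    """Two shallow modality-specific branches (separate weights) feeding
    modality-shared deep layers, following the layout the abstract describes."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Modality-specific shallow branches: no parameter sharing.
        self.tir_branch = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.vis_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Modality-shared deep layers: one set of weights for both modalities.
        self.shared = nn.Sequential(
            conv_block(64, 128),
            conv_block(128, 128),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x, modality):
        branch = self.tir_branch if modality == "tir" else self.vis_branch
        x = self.shared(branch(x))
        # L2-normalize so descriptor distances are comparable across modalities.
        return F.normalize(self.fc(x.flatten(1)), dim=1)

def cmt_loss(anchor, positive, negative, margin=0.5):
    """Cross-modality triplet loss: the anchor is embedded from one modality,
    the positive/negative from the other. The margin is an assumed value."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: a batch of corresponding TIR/VIS patch pairs; the VIS batch is
# rolled by one position to fabricate non-corresponding negatives.
model = CMMNetSketch()
tir = model(torch.randn(8, 1, 64, 64), "tir")
vis = model(torch.randn(8, 3, 64, 64), "vis")
loss = cmt_loss(tir, vis, vis.roll(1, dims=0))
```

In the full method, this triplet term is combined with the same-modality discriminative loss and the cross-modality loss on corresponding features; the abstract does not give the weighting between the three terms, so any particular combination here would be a further assumption.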
Source Journal
IEEE Transactions on Geoscience and Remote Sensing
CAS category: Engineering & Technology - Geochemistry & Geophysics
CiteScore: 11.50
Self-citation rate: 28.00%
Annual articles: 1912
Review time: 4.0 months
Journal description: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.