Self-Attention Progressive Network for Infrared and Visible Image Fusion

IF 4.2 · CAS Tier 2 (Earth Science) · JCR Q2 (Environmental Sciences)
Remote Sensing Pub Date : 2024-09-11 DOI:10.3390/rs16183370
Shuying Li, Muyi Han, Yuemei Qin, Qiang Li
Citations: 0

Abstract

Visible and infrared image fusion is a strategy that effectively extracts and fuses information from different sources. However, most existing methods largely neglect the issue of lighting imbalance, which makes a single fusion model inapplicable across different scenes. Several methods obtain low-level features from visible and infrared images at an early stage of input or during shallow feature extraction. However, these methods do not explore how low-level features provide a foundation for recognizing and exploiting the complementary and common information between the two types of images. As a result, the complementary and common information between the images is not fully analyzed and discussed. To address these issues, we propose a Self-Attention Progressive Network for the fusion of infrared and visible images. Firstly, we construct a Lighting-Aware Sub-Network to analyze the lighting distribution, and introduce an intensity loss to measure the probability of scene illumination. This approach enhances the model's adaptability to lighting conditions. Secondly, we introduce self-attention learning to design a multi-state joint feature extraction module (MSJFEM) that fully utilizes the contextual information among input keys. It guides the learning of a dynamic attention matrix to strengthen the capacity for visual representation. Finally, we design a Difference-Aware Propagation Module (DAPM) to extract and integrate edge details from the source images while supplementing differential information. Experiments on three benchmark datasets show that the proposed approach performs favorably compared to existing methods.
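The abstract does not include implementation details, so the following is only a minimal NumPy sketch of the scaled dot-product self-attention that a module like the MSJFEM builds on: pairwise affinities among feature tokens produce a dynamic attention matrix that re-weights each token by its context. All names, shapes, and the toy data are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over N feature tokens.

    x: (N, d) token features; w_q, w_k, w_v: (d, d) projections.
    Returns the re-weighted features and the attention matrix.
    """
    # Project tokens to queries, keys, and values
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Pairwise affinities among tokens, scaled by sqrt(d)
    scores = (q @ k.T) / np.sqrt(x.shape[1])
    # Row-wise softmax yields the dynamic attention matrix
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    # Each output token is a context-weighted mix of all values
    return attn @ v, attn

# Toy example: 6 spatial "tokens" with 4-dim features
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
w_q, w_k, w_v = (rng.standard_normal((4, 4)) for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (6, 4)
```

In the actual network, the token features would come from convolutional maps of the infrared and visible inputs, and the projections would be learned end-to-end rather than sampled at random.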
Source journal: Remote Sensing
CiteScore: 8.30
Self-citation rate: 24.00%
Articles per year: 5435
Review time: 20.66 days
Aims and Scope: Remote Sensing (ISSN 2072-4292) publishes regular research papers, reviews, letters and communications covering all aspects of the remote sensing process, from instrument design and signal processing to the retrieval of geophysical parameters and their application in geosciences. Our aim is to encourage scientists to publish experimental, theoretical and computational results in as much detail as possible so that results can be easily reproduced. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced.