DSAFusion: Detail-semantic-aware network for infrared and low-light visible image fusion

IF 3.4 | CAS Tier 3, Physics & Astrophysics | Q2, Instruments & Instrumentation
Menghan Xia, Cheng Lin, Biyun Xu, Qian Li, Hao Fang, Zhenghua Huang
{"title":"DSAFusion:用于红外和低光可见光图像融合的细节语义感知网络","authors":"Menghan Xia ,&nbsp;Cheng Lin ,&nbsp;Biyun Xu ,&nbsp;Qian Li ,&nbsp;Hao Fang ,&nbsp;Zhenghua Huang","doi":"10.1016/j.infrared.2025.105804","DOIUrl":null,"url":null,"abstract":"<div><div>It is important to simultaneously preserve detail and semantic information in both infrared and visible (especially low-light) images for the pursuit of high-quality fusion maps. Unfortunately, the existing fusion methods fails to balance them, resulting in the fusion results are over-smoothed, low-contrast, and sensitive to application scenarios. To address these problems, this paper develops a detail-semantic-aware network for low-light infared and visible image fusion, termed as DSAFusion. Our DSAFusion mainly includes the following key parts: Firstly, a dual-branch encoder is employed to extract the detail and semantic features in infrared and visible images. Then, the features from the two typical modes are respectively concatenated and fused by detail and semantic information fusion networks (respectively named as DFNet and SFNet). Finally, the fused features contribute to reconstruct the final fusion map by a decoder to decode them. Experimental results in both quantitation and qualification show that our DSAFusion is effective and performs better than the existing SOTA fusion methods on the preservation of textures and semantic information.</div></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"147 ","pages":"Article 105804"},"PeriodicalIF":3.4000,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DSAFusion: Detail-semantic-aware network for infrared and low-light visible image fusion\",\"authors\":\"Menghan Xia ,&nbsp;Cheng Lin ,&nbsp;Biyun Xu ,&nbsp;Qian Li ,&nbsp;Hao Fang ,&nbsp;Zhenghua Huang\",\"doi\":\"10.1016/j.infrared.2025.105804\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>It is important to simultaneously preserve detail and semantic information in both infrared and visible (especially low-light) images for the pursuit of high-quality fusion maps. Unfortunately, the existing fusion methods fails to balance them, resulting in the fusion results are over-smoothed, low-contrast, and sensitive to application scenarios. To address these problems, this paper develops a detail-semantic-aware network for low-light infared and visible image fusion, termed as DSAFusion. Our DSAFusion mainly includes the following key parts: Firstly, a dual-branch encoder is employed to extract the detail and semantic features in infrared and visible images. Then, the features from the two typical modes are respectively concatenated and fused by detail and semantic information fusion networks (respectively named as DFNet and SFNet). Finally, the fused features contribute to reconstruct the final fusion map by a decoder to decode them. 
Experimental results in both quantitation and qualification show that our DSAFusion is effective and performs better than the existing SOTA fusion methods on the preservation of textures and semantic information.</div></div>\",\"PeriodicalId\":13549,\"journal\":{\"name\":\"Infrared Physics & Technology\",\"volume\":\"147 \",\"pages\":\"Article 105804\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-03-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Infrared Physics & Technology\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1350449525000970\",\"RegionNum\":3,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"INSTRUMENTS & INSTRUMENTATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Infrared Physics & Technology","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350449525000970","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INSTRUMENTS & INSTRUMENTATION","Score":null,"Total":0}
Citations: 0

Abstract

Preserving both detail and semantic information from infrared and visible (especially low-light) images is essential for producing high-quality fusion maps. Unfortunately, existing fusion methods fail to balance the two, yielding results that are over-smoothed, low in contrast, and sensitive to the application scenario. To address these problems, this paper develops a detail-semantic-aware network for infrared and low-light visible image fusion, termed DSAFusion. DSAFusion comprises the following key parts: first, a dual-branch encoder extracts detail and semantic features from the infrared and visible images; then, the features from the two modalities are concatenated and fused by detail- and semantic-information fusion networks (named DFNet and SFNet, respectively); finally, a decoder reconstructs the final fusion map from the fused features. Quantitative and qualitative experimental results show that DSAFusion is effective and outperforms existing state-of-the-art (SOTA) fusion methods in preserving texture and semantic information.
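The abstract describes the data flow (dual-branch encoder, DFNet/SFNet fusion, decoder) but not its implementation. The following is a minimal PyTorch sketch of that pipeline, assuming a shared dual-branch encoder, plain convolutional blocks, and single-channel inputs; the module structure, channel widths, and layer choices are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the pipeline described in the abstract (assumptions, not
# the authors' implementation): a dual-branch encoder extracts detail and
# semantic features from each modality, DFNet and SFNet fuse the concatenated
# detail/semantic features, and a decoder reconstructs the fused image.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 convolution + LeakyReLU, the simplest plausible building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.LeakyReLU(0.1, inplace=True),
    )


class DualBranchEncoder(nn.Module):
    """Extracts separate detail and semantic feature maps from one image."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.detail_branch = nn.Sequential(conv_block(in_ch, feat_ch), conv_block(feat_ch, feat_ch))
        self.semantic_branch = nn.Sequential(conv_block(in_ch, feat_ch), conv_block(feat_ch, feat_ch))

    def forward(self, x):
        return self.detail_branch(x), self.semantic_branch(x)


class FusionNet(nn.Module):
    """Stand-in for both DFNet and SFNet: fuses concatenated IR/visible features."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.fuse = nn.Sequential(conv_block(2 * feat_ch, feat_ch), conv_block(feat_ch, feat_ch))

    def forward(self, feat_ir, feat_vis):
        return self.fuse(torch.cat([feat_ir, feat_vis], dim=1))


class DSAFusionSketch(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.encoder = DualBranchEncoder(feat_ch=feat_ch)  # shared across modalities (assumption)
        self.dfnet = FusionNet(feat_ch)   # detail-information fusion
        self.sfnet = FusionNet(feat_ch)   # semantic-information fusion
        self.decoder = nn.Sequential(
            conv_block(2 * feat_ch, feat_ch),
            nn.Conv2d(feat_ch, 1, kernel_size=1),
            nn.Tanh(),
        )

    def forward(self, ir, vis):
        ir_detail, ir_sem = self.encoder(ir)
        vis_detail, vis_sem = self.encoder(vis)
        fused_detail = self.dfnet(ir_detail, vis_detail)
        fused_sem = self.sfnet(ir_sem, vis_sem)
        # Decoder reconstructs the fusion map from the combined fused features
        return self.decoder(torch.cat([fused_detail, fused_sem], dim=1))


if __name__ == "__main__":
    # Example: fuse a pair of single-channel 256x256 images.
    model = DSAFusionSketch()
    ir = torch.rand(1, 1, 256, 256)
    vis = torch.rand(1, 1, 256, 256)
    print(model(ir, vis).shape)  # torch.Size([1, 1, 256, 256])
```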
Source journal: Infrared Physics & Technology
CiteScore: 5.70
Self-citation rate: 12.10%
Articles per year: 400
Review time: 67 days
Journal description: The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region. Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine. Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; and atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.