AFIRE: Adaptive FusionNet for illumination-robust feature extraction in heterogeneous imaging environments

IF 3.1 · JCR Q2, Instruments & Instrumentation · CAS Region 3, Physics & Astronomy
Mingxin Yu , Xufan Miao , Yichen Sun , Yuchen Bai , Lianqing Zhu
Journal: Infrared Physics & Technology, Vol. 142, Article 105557
DOI: 10.1016/j.infrared.2024.105557
Publication date: 2024-09-16 (Journal Article)
Citations: 0

Abstract


The fusion of infrared and visible images aims to synthesize a fused image that incorporates richer information by leveraging the distinct characteristics of each modality. However, the disparate quality of input images in terms of infrared and visible light significantly impacts fusion performance. To address this issue, we propose a novel deep adaptive fusion method called Adaptive FusionNet for Illumination-Robust Feature Extraction (AFIRE). This method involves the interactive processing of two input features and dynamically adjusts the fusion weights based on varying illumination conditions. Specifically, we introduce a novel interactive extraction structure during the feature extraction stage for both infrared and visible light, enabling the capture of more complementary information. Additionally, we design a Deep Adaptive Fusion module to assess the quality of input features and perform weighted fusion through a channel attention mechanism. Finally, a new loss function is formulated by incorporating the entropy and median of input images to guide the training of the fusion network. Extensive experiments demonstrate that AFIRE outperforms state-of-the-art methods in preserving pixel intensity distribution and texture details. Source code is available at: https://www.github.com/ISCLab-Bistu/AFIRE.
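The abstract states that the fusion weights adapt to input quality and that the loss incorporates the entropy and median of the input images, but it does not spell out the formulation. The sketch below is purely illustrative: the `image_entropy` and `adaptive_weights` helpers and the softmax weighting are assumptions, not the paper's actual design (see the linked repository for the real implementation).

```python
import math
from statistics import median

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a grayscale image's intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def adaptive_weights(ir_pixels, vis_pixels):
    """Hypothetical quality-driven fusion weights: a softmax over each
    input's entropy, so the more informative modality is weighted higher."""
    e_ir, e_vis = image_entropy(ir_pixels), image_entropy(vis_pixels)
    z = math.exp(e_ir) + math.exp(e_vis)
    return math.exp(e_ir) / z, math.exp(e_vis) / z

# Toy 2x2 "images": a flat (low-information) IR patch vs. a varied visible patch.
ir = [100, 100, 100, 100]
vis = [0, 80, 160, 240]
w_ir, w_vis = adaptive_weights(ir, vis)
fused = [w_ir * a + w_vis * b for a, b in zip(ir, vis)]
print(round(w_ir, 3), round(w_vis, 3), round(median(vis), 1))  # → 0.119 0.881 120.0
```

Here the flat infrared patch has zero entropy, so most of the fusion weight shifts to the more informative visible patch. AFIRE performs an analogous quality assessment on deep features via channel attention rather than on raw pixel statistics.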

Source journal: Infrared Physics & Technology
CiteScore: 5.70
Self-citation rate: 12.10%
Articles per year: 400
Review time: 67 days
Journal description: The Journal covers the entire field of infrared physics and technology: theory, experiment, application, devices and instrumentation. "Infrared" is defined as covering the near, mid and far infrared (terahertz) regions from 0.75 µm (750 nm) to 1 mm (300 GHz). Submissions in the 300 GHz to 100 GHz region may be accepted at the editors' discretion if their content is relevant to shorter wavelengths. Submissions must be primarily concerned with and directly relevant to this spectral region. Its core topics can be summarized as the generation, propagation and detection of infrared radiation; the associated optics, materials and devices; and its use in all fields of science, industry, engineering and medicine. Infrared techniques occur in many different fields, notably spectroscopy and interferometry; material characterization and processing; and atmospheric physics, astronomy and space research. Scientific aspects include lasers, quantum optics, quantum electronics, image processing and semiconductor physics. Some important applications are medical diagnostics and treatment, industrial inspection and environmental monitoring.