FRFusion: A deep fusion framework for infrared and visible images based on fast Fourier transform and Retinex model

Impact Factor 3.7 · CAS Zone 2 (Engineering) · JCR Q2 (Optics)
Zhen Pei , Jinbo Lu , Jinling Chen , Yongliang Qian , Lihua Fan , Hongyan Wang
DOI: 10.1016/j.optlaseng.2025.109362
Journal: Optics and Lasers in Engineering, Vol. 195, Article 109362 (published 2025-09-18)
Citations: 0

Abstract

Infrared and visible image fusion exploits the complementary characteristics of both modalities to produce a richer and more visually enhanced image. However, most existing methods primarily focus on well-lit conditions and tend to overlook texture and contrast degradation in low-light environments. Furthermore, these approaches often neglect frequency domain information during feature extraction. We propose a network called FRFusion for fusing infrared and visible images in low-light environments to address the aforementioned challenges. Firstly, based on the Retinex model, we designed encoders with different structures to decompose visible images into reflection and illumination components. During this process, we introduced a feature adjustment module (FAM) to enable the model to simultaneously extract information from the input image in spatial and frequency domains. It is worth noting that the extraction of infrared features pertains to the encoder structure of the reflection components of visible images. Secondly, in the feature fusion stage, we introduced the dual attention feature fusion module (DAFFM) to fully integrate the global and local features of infrared and visible images, thereby achieving a more comprehensive synthesis of complementary information. Finally, we propose a brightness adaptive network (BAN) for the illumination component, which restores the brightness information of the fused image by adaptively adjusting the brightness features. Experimental results on three public datasets demonstrate that our method excels in both visual quality and evaluation metrics.
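The two core ingredients the abstract names — a Retinex-style split of the visible image into reflectance and illumination components, and frequency-domain features obtained via the fast Fourier transform — can be sketched in a minimal form. This is not the authors' FRFusion code: the box-blur illumination estimate, the kernel size, and all function names below are illustrative assumptions.

```python
import numpy as np

def illumination_estimate(img, ksize=15):
    """Estimate the illumination map as a local average, computed as an
    FFT-based circular convolution with a uniform ksize x ksize kernel."""
    h, w = img.shape
    kernel = np.zeros((h, w))
    kernel[:ksize, :ksize] = 1.0 / ksize**2
    # Center the kernel so the convolution acts like a symmetric blur.
    kernel = np.roll(kernel, (-(ksize // 2), -(ksize // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def retinex_decompose(img, eps=1e-6):
    """Retinex model I = R * L: reflectance R carries texture and detail,
    illumination L carries brightness. eps avoids division by zero."""
    L = illumination_estimate(img)
    R = img / (L + eps)
    return R, L

def fft_features(img):
    """Amplitude and phase spectra -- the kind of frequency-domain cues a
    FAM-like module could consume alongside spatial features."""
    F = np.fft.fft2(img)
    return np.abs(F), np.angle(F)
```

The decomposition is exactly invertible (`R * (L + eps)` reconstructs the input), which is what lets a brightness-adaptive stage adjust `L` independently of the texture carried by `R`; likewise the amplitude/phase pair fully determines the image, so frequency-domain processing loses no information.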
Source journal: Optics and Lasers in Engineering (Engineering · Optics)
CiteScore: 8.90
Self-citation rate: 8.70%
Annual articles: 384
Review time: 42 days
Journal overview: Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions, and new theoretical concepts for experimental methods. Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal includes the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques