FRFusion: A deep fusion framework for infrared and visible images based on fast Fourier transform and Retinex model

Zhen Pei, Jinbo Lu, Jinling Chen, Yongliang Qian, Lihua Fan, Hongyan Wang

Optics and Lasers in Engineering, Vol. 195, Article 109362. Published 2025-09-18. DOI: 10.1016/j.optlaseng.2025.109362
https://www.sciencedirect.com/science/article/pii/S0143816625005470
Citations: 0
Abstract
Infrared and visible image fusion exploits the complementary characteristics of the two modalities to produce a richer, more visually informative image. However, most existing methods focus on well-lit conditions and overlook the texture and contrast degradation that occurs in low-light environments; they also tend to neglect frequency-domain information during feature extraction. To address these challenges, we propose FRFusion, a network for fusing infrared and visible images in low-light environments. First, based on the Retinex model, we design encoders with different structures to decompose visible images into reflectance and illumination components. During this process, we introduce a feature adjustment module (FAM) that enables the model to extract information from the input image in the spatial and frequency domains simultaneously. Notably, infrared features are extracted with the same encoder structure used for the reflectance component of visible images. Second, in the feature fusion stage, we introduce a dual attention feature fusion module (DAFFM) to fully integrate the global and local features of the infrared and visible images, achieving a more comprehensive synthesis of complementary information. Finally, we propose a brightness adaptive network (BAN) for the illumination component, which restores the brightness of the fused image by adaptively adjusting brightness features. Experimental results on three public datasets demonstrate that our method excels in both visual quality and quantitative evaluation metrics.
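For orientation, the Retinex model underlying the decomposition treats a visible image as the element-wise product of a reflectance map R and an illumination map L (I = R ∘ L); the encoders estimate these two components separately. The sketch below is a minimal PyTorch illustration of what a spatial-plus-frequency feature adjustment module in the spirit of FAM could look like: one branch convolves the feature map directly, the other filters its 2-D FFT spectrum before transforming back. The class name, layer choices, and channel sizes are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class FAMSketch(nn.Module):
    """Hypothetical spatial/frequency feature adjustment module.

    A toy analogue of FAM: one branch convolves the feature map in the
    spatial domain; the other takes a 2-D real FFT, filters the stacked
    real/imaginary parts with 1x1 convolutions, and transforms back.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: ordinary 3x3 convolution.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1),
        )
        # Frequency branch: 1x1 convolutions over the stacked
        # real and imaginary parts of the spectrum.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )
        # 1x1 convolution merging the two branches.
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        s = self.spatial(x)                      # spatial-domain features
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum
        spec = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        real, imag = spec.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return self.merge(torch.cat([s, f], dim=1))


# Usage: adjust 64-channel features of a 128x128 map.
fam = FAMSketch(64)
y = fam(torch.randn(1, 64, 128, 128))
print(y.shape)  # torch.Size([1, 64, 128, 128])
```

Processing the real and imaginary parts as extra channels is one common way to apply learned filters to a spectrum while keeping all operations differentiable; the paper's actual FAM, DAFFM, and BAN designs are described in the full text.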
About the journal:
Optics and Lasers in Engineering aims to provide an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeting the practical use of methods and devices, the development and enhancement of solutions, and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques