Zhenxing Zhu;Shuiyi Hu;Qingjing Ma;Mingsu Lin;Tianqi Yu;Jianling Hu
Published in: IEEE Sensors Journal, vol. 25, no. 9, pp. 16335-16348, 10 March 2025. DOI: 10.1109/JSEN.2025.3547995. Available: https://ieeexplore.ieee.org/document/10919051/
MIVFNet: Infrared and Visible Image Fusion Combined With Scene Illumination Decoupling in Low-Light Scenes
Infrared and visible image fusion is a significant technique for image enhancement. However, in low-light scenes, extracting features from visible images is difficult, and most existing fusion methods can hardly capture texture details and prominent infrared targets simultaneously. To address these problems, this article proposes an infrared and visible image fusion method called MIVFNet, which incorporates illumination decoupling for low-light scenes. The method generates high-quality fused images in low-light environments through four key stages: preprocessing, feature extraction, feature processing, and feature reconstruction. In the preprocessing stage, the reflection component of visible images is extracted using an illumination-decoupling network, and salient features of infrared images are enhanced via iterative least-squares (ILS) filtering and multilevel layered processing. Furthermore, by introducing Laplacian gradient processing into the L-GRB module, the feature extraction and feature reconstruction networks are designed to improve the descriptive power of texture features. In the feature processing stage, the extracted visible features are refined by a contrast enhancement network and then concatenated with the extracted infrared features. Experiments conducted on multiple datasets confirm that the proposed method fully extracts the visible details and infrared thermal targets of the source images in low-light environments and generates fused images with excellent subjective quality and objective indicators compared with other state-of-the-art fusion methods.
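The abstract's preprocessing and fusion ideas can be illustrated with a minimal hand-crafted sketch. The paper's actual illumination-decoupling and L-GRB modules are learned networks whose details are not given here; the code below is an assumption-laden stand-in, using a Gaussian-blur illumination estimate for Retinex-style reflectance decoupling and a 4-neighbour Laplacian for the gradient/texture step, with a simple max-based fusion rule. All function names and parameters are illustrative, not the authors'.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=5.0):
    """1-D Gaussian kernel used to estimate a smooth illumination map."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, size=15, sigma=5.0):
    """Separable Gaussian blur: horizontal pass, then vertical pass."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    h = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, h)

def decouple_reflectance(visible, eps=1e-6):
    """Retinex-style decoupling: reflectance = image / illumination estimate.

    Stands in for the paper's learned illumination-decoupling network.
    """
    illum = blur(visible)
    return np.clip(visible / (illum + eps), 0.0, 1.0)

def laplacian(img):
    """4-neighbour Laplacian as a cheap texture/gradient descriptor."""
    p = np.pad(img, 1, mode="reflect")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def fuse(visible, infrared, alpha=0.5):
    """Toy fusion: keep bright infrared targets, add visible texture detail."""
    refl = decouple_reflectance(visible)
    detail = np.abs(laplacian(refl))
    fused = np.maximum(infrared, alpha * refl + (1 - alpha) * detail)
    return np.clip(fused, 0.0, 1.0)

# Usage: both inputs are grayscale images normalized to [0, 1].
vis = np.random.rand(32, 32)
ir = np.random.rand(32, 32)
out = fuse(vis, ir)
```

In this sketch the `np.maximum` rule guarantees hot infrared targets survive fusion, while the reflectance and Laplacian terms carry visible texture; the actual method replaces every hand-crafted step with a trained network.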
Journal Introduction:
The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensor-actuators. The IEEE Sensors Journal deals with the following:
-Sensor Phenomenology, Modelling, and Evaluation
-Sensor Materials, Processing, and Fabrication
-Chemical and Gas Sensors
-Microfluidics and Biosensors
-Optical Sensors
-Physical Sensors: Temperature, Mechanical, Magnetic, and others
-Acoustic and Ultrasonic Sensors
-Sensor Packaging
-Sensor Networks
-Sensor Applications
-Sensor Systems: Signals, Processing, and Interfaces
-Actuators and Sensor Power Systems
-Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
-Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave (e.g., electromagnetic and acoustic) and non-wave (e.g., chemical, gravity, particle, thermal, radiative and non-radiative) sensor data; detection, estimation, and classification based on sensor data)
-Sensors in Industrial Practice