Visible and Infrared Image Fusion Based on Modality Feature Enhancement for Localization in Low-Light Environments

IF 4.3 · CAS Zone 2 (Comprehensive) · JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Shan Su;Li Yan;Yuquan Zhou;Pinzhuo Wang;Changjun Chen
{"title":"基于模态特征增强的低光环境下可见光与红外图像融合定位","authors":"Shan Su;Li Yan;Yuquan Zhou;Pinzhuo Wang;Changjun Chen","doi":"10.1109/JSEN.2025.3576989","DOIUrl":null,"url":null,"abstract":"In low-light environments, the imaging quality of single-modal red green blue (RGB) sensors severely deteriorates, leading to blurred textures and loss of effective information, which affects the accuracy of downstream tasks. Our goal is to provide robust feature extraction in low-light environments by fusing visible and infrared images. Infrared and visible fusion is an important and effective image enhancement technique, aiming to generate high-quality fused images with prominent targets and rich textures in challenging environments. Hence, we propose an unsupervised enhanced infrared and visible fusion method for low-light environments. The method first designs a module for single-modal image feature enhancement based on infrared and visible images for low-light conditions, initially improving the quality of the original infrared and visible images. Subsequently, a cross-modal feature enhancement module based on edge/texture information extraction and guidance is proposed to enhance the edge structures and texture details in the fused features. Specifically, the network utilizes the ResNet as the backbone for single-modal image feature extraction and enhancement, employing channel and spatial attention mechanisms to enhance single-modal image features. An infrared/visible image edge/texture information guidance module is added, leveraging the complementary edge/texture features provided by the two different modalities to guide the learning of the other modality image, thereby achieving the goal of cross-modal image enhancement in low-light environments. In the fusion stage, the DenseNet is employed as the unsupervised fusion network framework. Based on image information measurements, a patch-based loss function with regional weighting is designed, enabling the fused image to dynamically learn advantageous features from different regions of different modal images, achieving complementary feature enhancement of infrared and visible images. Through qualitative and quantitative analyses of three datasets, compared with eight other state of the art (SOTA) methods, the proposed method demonstrates balanced performance, achieving high-quality fusion of multimodal features in low-light environments.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 15","pages":"28476-28492"},"PeriodicalIF":4.3000,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visible and Infrared Image Fusion Based on Modality Feature Enhancement for Localization in Low-Light Environments\",\"authors\":\"Shan Su;Li Yan;Yuquan Zhou;Pinzhuo Wang;Changjun Chen\",\"doi\":\"10.1109/JSEN.2025.3576989\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In low-light environments, the imaging quality of single-modal red green blue (RGB) sensors severely deteriorates, leading to blurred textures and loss of effective information, which affects the accuracy of downstream tasks. Our goal is to provide robust feature extraction in low-light environments by fusing visible and infrared images. Infrared and visible fusion is an important and effective image enhancement technique, aiming to generate high-quality fused images with prominent targets and rich textures in challenging environments. 
Hence, we propose an unsupervised enhanced infrared and visible fusion method for low-light environments. The method first designs a module for single-modal image feature enhancement based on infrared and visible images for low-light conditions, initially improving the quality of the original infrared and visible images. Subsequently, a cross-modal feature enhancement module based on edge/texture information extraction and guidance is proposed to enhance the edge structures and texture details in the fused features. Specifically, the network utilizes the ResNet as the backbone for single-modal image feature extraction and enhancement, employing channel and spatial attention mechanisms to enhance single-modal image features. An infrared/visible image edge/texture information guidance module is added, leveraging the complementary edge/texture features provided by the two different modalities to guide the learning of the other modality image, thereby achieving the goal of cross-modal image enhancement in low-light environments. In the fusion stage, the DenseNet is employed as the unsupervised fusion network framework. Based on image information measurements, a patch-based loss function with regional weighting is designed, enabling the fused image to dynamically learn advantageous features from different regions of different modal images, achieving complementary feature enhancement of infrared and visible images. Through qualitative and quantitative analyses of three datasets, compared with eight other state of the art (SOTA) methods, the proposed method demonstrates balanced performance, achieving high-quality fusion of multimodal features in low-light environments.\",\"PeriodicalId\":447,\"journal\":{\"name\":\"IEEE Sensors Journal\",\"volume\":\"25 15\",\"pages\":\"28476-28492\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2025-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Sensors Journal\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11031096/\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Journal","FirstCategoryId":"103","ListUrlMain":"https://ieeexplore.ieee.org/document/11031096/","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

In low-light environments, the imaging quality of single-modal red-green-blue (RGB) sensors deteriorates severely, leading to blurred textures and loss of effective information, which degrades the accuracy of downstream tasks. Our goal is to provide robust feature extraction in low-light environments by fusing visible and infrared images. Infrared and visible fusion is an important and effective image-enhancement technique, aiming to generate high-quality fused images with prominent targets and rich textures in challenging environments. Hence, we propose an unsupervised enhanced infrared and visible fusion method for low-light environments. The method first designs a single-modal feature-enhancement module for infrared and visible images under low-light conditions, initially improving the quality of the original images. Subsequently, a cross-modal feature-enhancement module based on edge/texture information extraction and guidance is proposed to strengthen the edge structures and texture details in the fused features. Specifically, the network uses ResNet as the backbone for single-modal feature extraction and enhancement, employing channel and spatial attention mechanisms to enhance single-modal image features. An infrared/visible edge/texture information guidance module is added, leveraging the complementary edge/texture features provided by the two modalities to guide the learning of the other modality's image, thereby achieving cross-modal image enhancement in low-light environments. In the fusion stage, DenseNet is employed as the unsupervised fusion network framework. Based on image information measurements, a patch-based loss function with regional weighting is designed, enabling the fused image to dynamically learn advantageous features from different regions of the different modal images, achieving complementary feature enhancement of infrared and visible images. Through qualitative and quantitative analyses on three datasets against eight state-of-the-art (SOTA) methods, the proposed method demonstrates balanced performance, achieving high-quality fusion of multimodal features in low-light environments.
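The abstract's single-modal enhancement step (channel attention followed by spatial attention over ResNet features) follows a widely used pattern. The sketch below illustrates that pattern only, in a CBAM-style arrangement; the class names, reduction ratio, and serial ordering are assumptions, since the abstract does not specify the module's internals.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Pool global context per channel, then reweight channels."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        w = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * w

class SpatialAttention(nn.Module):
    """Pool across channels, then reweight spatial positions."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class SingleModalEnhancer(nn.Module):
    """Channel then spatial attention over one modality's backbone features."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, feat):
        return self.sa(self.ca(feat))

# Usage: enhance 64-channel ResNet-stage features from one modality.
feats = torch.randn(1, 64, 120, 160)
print(SingleModalEnhancer(64)(feats).shape)  # torch.Size([1, 64, 120, 160])
```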
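The cross-modal edge/texture guidance module is described only at a high level: edge/texture cues from one modality guide the learning of the other. One plausible minimal realization, sketched below, extracts a Sobel edge map from modality B's image and uses it as a soft spatial gate on modality A's features; the gating form, residual scaling, and all names are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeGuidance(nn.Module):
    """Gate one modality's features with an edge map from the other modality.
    Hypothetical realization of cross-modal edge/texture guidance."""
    def __init__(self, channels: int):
        super().__init__()
        sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", sobel.view(1, 1, 3, 3))
        self.register_buffer("ky", sobel.t().contiguous().view(1, 1, 3, 3))
        self.proj = nn.Conv2d(1, channels, kernel_size=1)  # lift edge map to feature channels

    def edges(self, img):
        # Sobel gradient magnitude of a single-channel image (B, 1, H, W).
        gx = F.conv2d(img, self.kx, padding=1)
        gy = F.conv2d(img, self.ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

    def forward(self, feat_a, img_b):
        # Resize modality B's edge map to modality A's feature resolution,
        # then apply it as a residual soft gate so no original signal is lost.
        e = F.interpolate(self.edges(img_b), size=feat_a.shape[-2:],
                          mode="bilinear", align_corners=False)
        return feat_a * (1 + torch.sigmoid(self.proj(e)))

# Usage: guide IR features (64 channels, 1/4 resolution) with visible-image edges.
ir_feat = torch.randn(1, 64, 32, 32)
vis_img = torch.rand(1, 1, 128, 128)
guided = EdgeGuidance(64)(ir_feat, vis_img)
```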
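For the patch-based loss with regional weighting, a common construction scores each patch of the infrared and visible images with an information measure and turns the scores into per-region weights via softmax, so the fused image follows whichever source is more informative in each region. The sketch below assumes mean gradient magnitude as the information measure, an L1 data term, and a temperature tau; the paper's actual image information measurements and loss form may differ.

```python
import torch
import torch.nn.functional as F

def patch_information(img: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Per-patch information proxy: mean Sobel gradient magnitude (an assumption)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3).contiguous()
    grad = F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()
    return F.avg_pool2d(grad, patch)  # (B, 1, H/patch, W/patch)

def region_weighted_fusion_loss(fused, ir, vis, patch=16, tau=0.1):
    """Softmax over the two modalities' per-patch scores decides which
    source each region of the fused image should resemble."""
    scores = torch.stack([patch_information(ir, patch),
                          patch_information(vis, patch)], dim=0)
    w = torch.softmax(scores / tau, dim=0)  # (2, B, 1, h, w), sums to 1 per region
    w_ir = F.interpolate(w[0], scale_factor=patch, mode="nearest")
    w_vis = F.interpolate(w[1], scale_factor=patch, mode="nearest")
    return (w_ir * (fused - ir).abs() + w_vis * (fused - vis).abs()).mean()

# Usage with random single-channel 128x128 images.
ir, vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
print(region_weighted_fusion_loss((ir + vis) / 2, ir, vis).item())
```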
Source journal: IEEE Sensors Journal (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 7.70
Self-citation rate: 14.00%
Annual publication count: 2058
Review time: 5.2 months
Journal description: The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensors-actuators. IEEE Sensors Journal deals with the following:
-Sensor Phenomenology, Modelling, and Evaluation
-Sensor Materials, Processing, and Fabrication
-Chemical and Gas Sensors
-Microfluidics and Biosensors
-Optical Sensors
-Physical Sensors: Temperature, Mechanical, Magnetic, and Others
-Acoustic and Ultrasonic Sensors
-Sensor Packaging
-Sensor Networks
-Sensor Applications
-Sensor Systems: Signals, Processing, and Interfaces
-Actuators and Sensor Power Systems
-Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
-Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave, e.g., electromagnetic and acoustic, and non-wave, e.g., chemical, gravity, particle, thermal, radiative and non-radiative, sensor data; detection, estimation, and classification based on sensor data)
-Sensors in Industrial Practice