BICFusion: An unsupervised infrared and visible image fusion framework for beyond illumination constraints

Impact Factor 4.6 · CAS Zone 2 (Physics and Astronomy) · JCR Q1 (Optics)
Jinye Peng, Yu Chen, Shenglin Peng, Zhaoke Liu, Jie Chen, Shuyi Qu, Jun Wang
DOI: 10.1016/j.optlastec.2025.113554
Journal: Optics and Laser Technology, Volume 192, Article 113554
Published: 2025-07-18 (Journal Article)
Citations: 0

Abstract

Infrared and visible image fusion is aimed at merging features from both modalities in order to produce a more information-rich fused image. However, the majority of existing methods have overlooked the specific requirements and challenges inherent in fusion tasks under low-light conditions. In such scenes, texture degradation due to poor illumination is common, and furthermore, local overexposure may result in significant information loss. To tackle these challenges, a novel framework named BICFusion is introduced, which addresses these issues through reflectance separation, cross-modal feature compensation, and dual enhancement of texture and contrast. The Retinex theory is employed to design a network that extracts reflectance representing the intrinsic structure and details of the scene from the visible image, thereby providing the fusion result with rich structural information under minimal illumination constraints. The cross-modal feature guidance weighting module (CFGW) is developed to compensate for missing details by leveraging the infrared image when the visible image lacks sufficient texture information due to adverse lighting conditions such as low light or overexposure. Subsequently, the texture enhancement fusion module (TEFM) and the global-local contrast enhancement loss function are proposed to jointly enhance the fusion quality in terms of texture and contrast. Experiments conducted with twelve state-of-the-art methods on three publicly available datasets validate the superior performance of BICFusion in preserving fine details under low-light and overexposed conditions.
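The Retinex separation the abstract builds on can be illustrated with the classical model I = R · L, where a smooth illumination map L is estimated and divided out to recover the reflectance R that carries scene structure and detail. The sketch below is a minimal NumPy illustration of that underlying model only: the paper learns the decomposition with a network, whereas here a hand-rolled box blur stands in as the illumination estimate, and all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def box_blur(img, k=15):
    """Separable edge-padded box filter: a crude smooth
    illumination estimate. k must be odd."""
    kern = np.ones(k) / k
    pad = k // 2
    rows = np.apply_along_axis(
        lambda v: np.convolve(np.pad(v, pad, mode="edge"), kern, mode="valid"),
        axis=1, arr=img)
    return np.apply_along_axis(
        lambda v: np.convolve(np.pad(v, pad, mode="edge"), kern, mode="valid"),
        axis=0, arr=rows)

def retinex_decompose(img, eps=1e-6):
    """Split image I into illumination L (smooth component) and
    reflectance R = I / L (structure and detail), per the classical
    Retinex model I = R * L."""
    img = img.astype(np.float64) + eps          # avoid division by zero
    illumination = box_blur(img) + eps          # smooth estimate of L
    reflectance = img / illumination            # R = I / L
    return reflectance, illumination

# usage: a dim horizontal-gradient stand-in for a low-light visible image
vis = np.tile(np.linspace(0.05, 0.3, 64), (64, 1))
R, L = retinex_decompose(vis)
```

Dividing out the smooth illumination normalizes away the global brightness gradient, which is why a Retinex-style reflectance map keeps structural detail usable even under weak or uneven lighting.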
Source journal
CiteScore: 8.50
Self-citation rate: 10.00%
Articles per year: 1060
Review time: 3.4 months
Journal description: Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication. The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
• developments in all types of lasers
• developments in optoelectronic devices and photonics
• developments in new photonics and optical concepts
• developments in conventional optics, optical instruments and components
• techniques of optical metrology, including interferometry and optical fibre sensors
• LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
• applications of lasers to materials processing, optical NDT display (including holography) and optical communication
• research and development in the field of laser safety including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
• developments in optical computing and optical information processing
• developments in new optical materials
• developments in new optical characterization methods and techniques
• developments in quantum optics
• developments in light assisted micro and nanofabrication methods and techniques
• developments in nanophotonics and biophotonics
• developments in imaging processing and systems