Jinye Peng, Yu Chen, Shenglin Peng, Zhaoke Liu, Jie Chen, Shuyi Qu, Jun Wang

Optics and Laser Technology, Volume 192, Article 113554. DOI: 10.1016/j.optlastec.2025.113554. Published 2025-07-18.
BICFusion: An unsupervised infrared and visible image fusion framework for beyond illumination constraints
Infrared and visible image fusion is aimed at merging features from both modalities in order to produce a more information-rich fused image. However, the majority of existing methods have overlooked the specific requirements and challenges inherent in fusion tasks under low-light conditions. In such scenes, texture degradation due to poor illumination is common, and furthermore, local overexposure may result in significant information loss. To tackle these challenges, a novel framework named BICFusion is introduced, which addresses these issues through reflectance separation, cross-modal feature compensation, and dual enhancement of texture and contrast. The Retinex theory is employed to design a network that extracts reflectance representing the intrinsic structure and details of the scene from the visible image, thereby providing the fusion result with rich structural information under minimal illumination constraints. The cross-modal feature guidance weighting module (CFGW) is developed to compensate for missing details by leveraging the infrared image when the visible image lacks sufficient texture information due to adverse lighting conditions such as low light or overexposure. Subsequently, the texture enhancement fusion module (TEFM) and the global-local contrast enhancement loss function are proposed to jointly enhance the fusion quality in terms of texture and contrast. Experiments conducted with twelve state-of-the-art methods on three publicly available datasets validate the superior performance of BICFusion in preserving fine details under low-light and overexposed conditions.
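To make the pipeline concrete, the sketch below illustrates the two core ideas in simplified form: Retinex-style reflectance separation (dividing out a smoothed illumination estimate in the log domain) and a toy cross-modal compensation weight that leans on the infrared image where the visible reflectance carries little local texture. This is an illustrative approximation, not the paper's CFGW/TEFM networks; the function names, the single-scale Gaussian illumination estimate, and the gradient-based weighting are assumptions made for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_reflectance(visible, sigma=15.0, eps=1e-6):
    """Single-scale Retinex: log R = log I - log(Gaussian(I)).
    The Gaussian-blurred image stands in for the illumination
    component, so the residual approximates scene reflectance."""
    visible = visible.astype(np.float64) + eps
    illumination = gaussian_filter(visible, sigma) + eps
    log_r = np.log(visible) - np.log(illumination)
    # Rescale to [0, 1] so the reflectance can be fused directly
    return (log_r - log_r.min()) / (log_r.max() - log_r.min() + eps)

def cross_modal_compensate(vis_reflectance, infrared, eps=1e-6):
    """Toy per-pixel weighting: where the visible reflectance has
    weak local gradients (low texture, e.g. under-/over-exposed
    regions), fall back on the infrared image instead."""
    gy, gx = np.gradient(vis_reflectance)
    texture = np.abs(gx) + np.abs(gy)
    w = texture / (texture.max() + eps)  # 1 = trust visible, 0 = trust IR
    return w * vis_reflectance + (1.0 - w) * infrared
```

In this simplified view, dark or saturated visible regions produce flat reflectance, which drives the weight toward the infrared branch; the actual framework learns this weighting (CFGW) and further refines texture and contrast via TEFM and the global-local contrast loss.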
Journal Introduction:
Optics & Laser Technology aims to provide a vehicle for the publication of a broad range of high-quality research and review papers in those fields of scientific and engineering research appertaining to the development and application of the technology of optics and lasers. Papers describing original work in these areas are submitted to rigorous refereeing prior to acceptance for publication.
The scope of Optics & Laser Technology encompasses, but is not restricted to, the following areas:
•development in all types of lasers
•developments in optoelectronic devices and photonics
•developments in new photonics and optical concepts
•developments in conventional optics, optical instruments and components
•techniques of optical metrology, including interferometry and optical fibre sensors
•LIDAR and other non-contact optical measurement techniques, including optical methods in heat and fluid flow
•applications of lasers to materials processing, optical NDT, display (including holography) and optical communication
•research and development in the field of laser safety including studies of hazards resulting from the applications of lasers (laser safety, hazards of laser fume)
•developments in optical computing and optical information processing
•developments in new optical materials
•developments in new optical characterization methods and techniques
•developments in quantum optics
•developments in light assisted micro and nanofabrication methods and techniques
•developments in nanophotonics and biophotonics
•developments in image processing and imaging systems