Multi-scale wavelet feature fusion network for low-light image enhancement
Ran Wei, Xinjie Wei, Shucheng Xia, Kan Chang, Mingyang Ling, Jingxiang Nong, Li Xu
Computers & Graphics, vol. 127, Article 104182. Published 2025-02-21. DOI: 10.1016/j.cag.2025.104182
Citations: 0
Abstract
Low-light image enhancement (LLIE) aims to enhance the visibility and quality of low-light images. However, existing methods often struggle to effectively balance global and local image content, producing suboptimal results. To address this challenge, we propose a novel multi-scale wavelet feature fusion network (MWFFnet) for low-light image enhancement. Our approach utilizes a U-shaped architecture in which the traditional downsampling and upsampling operations are replaced by the discrete wavelet transform (DWT) and inverse DWT (IDWT), respectively. This strategy helps to reduce the difficulty of learning the complex mapping from low-light images to well-exposed ones. Furthermore, we incorporate a dual transposed attention (DTA) module at each feature scale. DTA effectively captures long-range dependencies between image contents, thus enhancing the network’s ability to understand intricate image structures. To further improve the enhancement quality, we develop a cross-layer attentional feature fusion (CAFF) module that effectively integrates features from both the encoder and decoder. This mechanism enables the network to leverage contextual information across various levels of representation, resulting in a more comprehensive understanding of the images. Extensive experiments demonstrate that, with a reasonable model size, the proposed MWFFnet outperforms several state-of-the-art methods. Our code will be available online.
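To illustrate the idea of replacing down/upsampling with DWT/IDWT, here is a minimal NumPy sketch of a single-level 2D Haar wavelet transform and its exact inverse. This is not the authors' implementation (the paper's code is not reproduced here); function names and the single-channel, Haar-only setup are illustrative assumptions. The key property it demonstrates is invertibility: unlike strided downsampling or pooling, the DWT halves spatial resolution without discarding information, since the IDWT reconstructs the input exactly.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT: splits an (H, W) image into four
    half-resolution subbands (LL, LH, HL, HH). In a U-shaped network,
    this can stand in for strided downsampling."""
    a = x[0::2, 0::2]  # even rows, even cols
    b = x[0::2, 1::2]  # even rows, odd cols
    c = x[1::2, 0::2]  # odd rows, even cols
    d = x[1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2.0  # low-pass approximation
    lh = (a + b - c - d) / 2.0  # vertical detail
    hl = (a - b + c - d) / 2.0  # horizontal detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: exactly reconstructs the original image,
    so no spatial information is lost across scales."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

img = np.random.rand(8, 8)
rec = haar_idwt2(*haar_dwt2(img))
print(np.allclose(rec, img))  # prints True: perfect reconstruction
```

In an encoder such as the one described above, the subbands would be fed to subsequent convolutional stages, and the matching decoder level would apply the IDWT in place of transposed convolution or interpolation-based upsampling.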
Journal Introduction
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.