HLGNet: High-Light Guided Network for low-light instance segmentation with spatial-frequency domain enhancement
Huaping Zhou, Tao Wu, Kelei Sun, Jin Wu, Bin Deng, Xueseng Zhang
Neural Networks, Volume 190, Article 107613 (published 2025-05-27). DOI: 10.1016/j.neunet.2025.107613
Abstract
Instance segmentation models generally perform well under typical lighting conditions but struggle in low-light environments due to insufficient fine-grained detail. Frequency-domain enhancement has shown promise in addressing this problem. However, the lack of spatial-domain processing in existing frequency-domain methods often results in poor boundary delineation and inadequate local perception. To address these challenges, we propose HLGNet (High-Light Guided Network). By leveraging high-light image masks, our approach integrates enhancements in both the frequency and spatial domains, thereby improving the feature representation of low-light images. Specifically, we propose the SPE (Spatial-Frequency Enhancement) Block, which effectively combines and complements local spatial features with global frequency-domain information. Additionally, we design the DAF (Dynamic Affine Fusion) module to inject frequency-domain information into semantically significant features, enhancing the model's ability to capture both detailed target information and global semantic context. Finally, we propose the HLG Decoder, which dynamically adjusts the attention distribution using mutual information and entropy, guided by high-light image masks. This ensures improved focus on both local details and global semantics. Extensive quantitative and qualitative evaluations on two widely used low-light instance segmentation datasets demonstrate that HLGNet outperforms current state-of-the-art methods.
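To make the two enhancement ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a block that pairs a local spatial (convolutional) branch with a global frequency-domain (FFT) branch, and an affine-fusion module that injects frequency-enhanced features into a semantic feature map via learned scale and shift. All module names, channel sizes, and design details are assumptions for illustration only.

```python
# Illustrative sketch of spatial-frequency enhancement and dynamic affine fusion.
# Assumed design; the actual SPE Block / DAF module in HLGNet may differ.
import torch
import torch.nn as nn
import torch.fft


class SpatialFrequencyBlock(nn.Module):
    """Combine local spatial features (convolution) with global frequency-domain
    features (processing the 2-D real FFT of the feature map), then fuse them."""

    def __init__(self, channels: int):
        super().__init__()
        # Local spatial branch: small convolutional block for boundary/local detail.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Frequency branch: 1x1 convs over the stacked real/imaginary parts.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.spatial(x)

        # Global branch: real FFT, process spectrum, inverse FFT back to spatial domain.
        spec = torch.fft.rfft2(x, norm="ortho")               # complex, (B, C, H, W//2+1)
        spec_feat = torch.cat([spec.real, spec.imag], dim=1)  # (B, 2C, H, W//2+1)
        spec_feat = self.freq(spec_feat)
        real, imag = torch.chunk(spec_feat, 2, dim=1)
        global_feat = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")

        # Residual fusion of local and global branches.
        return self.fuse(torch.cat([local, global_feat], dim=1)) + x


class DynamicAffineFusion(nn.Module):
    """Inject frequency-enhanced features into a semantic feature map by predicting
    per-position scale (gamma) and shift (beta) from the frequency features."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_gamma = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_beta = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, semantic: torch.Tensor, freq_feat: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(freq_feat)
        beta = self.to_beta(freq_feat)
        return semantic * (1 + gamma) + beta  # affine modulation


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    spe = SpatialFrequencyBlock(64)
    daf = DynamicAffineFusion(64)
    enhanced = spe(x)
    fused = daf(semantic=torch.randn(1, 64, 32, 32), freq_feat=enhanced)
    print(enhanced.shape, fused.shape)  # both torch.Size([1, 64, 32, 32])
```

The sketch keeps the two branches separable so the frequency path captures global illumination statistics while the convolutional path preserves local boundaries; the affine fusion then conditions semantic features on that enhanced signal.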
Journal Introduction
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.