{"title":"基于扩散的Kolmogorov-Arnold网络(KANs)弱光图像增强","authors":"Chia-Hung Yeh , Cheng-Yue Liou","doi":"10.1016/j.array.2025.100431","DOIUrl":null,"url":null,"abstract":"<div><div>Low-light image enhancement is a fundamental task in computer vision, playing a critical role in applications such as autonomous driving, surveillance, and aerial imaging. However, low-light images often suffer from severe noise, loss of detail, and poor contrast, which degrade visual quality and hinder downstream tasks. Traditional stable diffusion-based enhancement methods apply noise uniformly across the entire image during the denoising process, leading to unnecessary detail degradation in texture-rich areas. To address this limitation, we propose an adaptive noise modulation framework that integrates Kolmogorov-Arnold Networks (KANs) into the diffusion process. Unlike conventional approaches, our method leverages KANs to analyze local image structures and selectively control noise distribution, ensuring that critical details are preserved while effectively enhancing darker regions. By iteratively injecting and removing noise through a structure-aware diffusion mechanism, our model progressively refines image features, achieving stable and high-fidelity restoration. Extensive experiments on multiple low-light datasets demonstrate that our method achieves 20.31 dB PSNR and 0.137 LPIPS on the LOL-v2 dataset, outperforming state-of-the-art methods such as EnlightenGAN and PairLIE. Moreover, our model maintains high efficiency with only 0.08M parameters and 13.72G FLOPs, making it well-suited for real-world deployment.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"27 ","pages":"Article 100431"},"PeriodicalIF":4.5000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Diffusion-based low-light image enhancement with Kolmogorov-Arnold Networks (KANs)\",\"authors\":\"Chia-Hung Yeh , Cheng-Yue Liou\",\"doi\":\"10.1016/j.array.2025.100431\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Low-light image enhancement is a fundamental task in computer vision, playing a critical role in applications such as autonomous driving, surveillance, and aerial imaging. However, low-light images often suffer from severe noise, loss of detail, and poor contrast, which degrade visual quality and hinder downstream tasks. Traditional stable diffusion-based enhancement methods apply noise uniformly across the entire image during the denoising process, leading to unnecessary detail degradation in texture-rich areas. To address this limitation, we propose an adaptive noise modulation framework that integrates Kolmogorov-Arnold Networks (KANs) into the diffusion process. Unlike conventional approaches, our method leverages KANs to analyze local image structures and selectively control noise distribution, ensuring that critical details are preserved while effectively enhancing darker regions. By iteratively injecting and removing noise through a structure-aware diffusion mechanism, our model progressively refines image features, achieving stable and high-fidelity restoration. Extensive experiments on multiple low-light datasets demonstrate that our method achieves 20.31 dB PSNR and 0.137 LPIPS on the LOL-v2 dataset, outperforming state-of-the-art methods such as EnlightenGAN and PairLIE. 
Moreover, our model maintains high efficiency with only 0.08M parameters and 13.72G FLOPs, making it well-suited for real-world deployment.</div></div>\",\"PeriodicalId\":8417,\"journal\":{\"name\":\"Array\",\"volume\":\"27 \",\"pages\":\"Article 100431\"},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2025-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Array\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S259000562500058X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S259000562500058X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Diffusion-based low-light image enhancement with Kolmogorov-Arnold Networks (KANs)
Low-light image enhancement is a fundamental task in computer vision, playing a critical role in applications such as autonomous driving, surveillance, and aerial imaging. However, low-light images often suffer from severe noise, loss of detail, and poor contrast, which degrade visual quality and hinder downstream tasks. Traditional stable diffusion-based enhancement methods apply noise uniformly across the entire image during the denoising process, leading to unnecessary detail degradation in texture-rich areas. To address this limitation, we propose an adaptive noise modulation framework that integrates Kolmogorov-Arnold Networks (KANs) into the diffusion process. Unlike conventional approaches, our method leverages KANs to analyze local image structures and selectively control noise distribution, ensuring that critical details are preserved while effectively enhancing darker regions. By iteratively injecting and removing noise through a structure-aware diffusion mechanism, our model progressively refines image features, achieving stable and high-fidelity restoration. Extensive experiments on multiple low-light datasets demonstrate that our method achieves 20.31 dB PSNR and 0.137 LPIPS on the LOL-v2 dataset, outperforming state-of-the-art methods such as EnlightenGAN and PairLIE. Moreover, our model maintains high efficiency with only 0.08M parameters and 13.72G FLOPs, making it well-suited for real-world deployment.
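The abstract describes the mechanism only at a high level, so below is a minimal, hypothetical PyTorch sketch of the core idea: a small KAN-style layer maps local structure cues to a per-pixel noise scale, which then modulates the noise injected in a standard forward-diffusion step so that texture-rich regions receive less noise than flat dark regions. Everything here is an illustrative assumption rather than the paper's implementation: the KAN parameterization (Gaussian radial basis functions instead of the B-splines typical of KANs), the choice of structure cues (brightness and gradient magnitude), and the names `KANLayer` and `adaptive_forward_diffusion` are all invented for this sketch.

```python
# Minimal, hypothetical sketch of KAN-modulated noise injection in a
# diffusion forward process. Not the paper's implementation: the KAN
# parameterization, structure cues, and all names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANLayer(nn.Module):
    """KAN-style layer: each input->output edge carries a learnable
    univariate function, here a sum of Gaussian radial basis functions
    (original KANs typically use B-splines)."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-1.0, 1.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*grid_range, num_basis))
        self.log_width = nn.Parameter(torch.zeros(()))  # shared RBF width
        # One coefficient per (output, input edge, basis function).
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x):                               # x: (..., in_dim)
        width = self.log_width.exp()
        # RBF activations for every input coordinate: (..., in_dim, num_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / width) ** 2)
        # Sum the learnable univariate functions over all incoming edges.
        return torch.einsum("...jk,ojk->...o", phi, self.coef)

def adaptive_forward_diffusion(x0, t, alphas_cumprod, kan):
    """Inject noise q(x_t | x_0) with a per-pixel scale predicted by the KAN,
    so texture-rich regions can receive less noise than flat dark regions."""
    gray = x0.mean(dim=1, keepdim=True)                 # (B, 1, H, W)
    gx = F.pad(gray[..., :, 1:] - gray[..., :, :-1], (0, 1))        # d/dx
    gy = F.pad(gray[..., 1:, :] - gray[..., :-1, :], (0, 0, 0, 1))  # d/dy
    grad = (gx ** 2 + gy ** 2).sqrt()                   # gradient magnitude
    feats = torch.stack([gray, grad], dim=-1)           # (B, 1, H, W, 2)
    scale = torch.sigmoid(kan(feats)).squeeze(-1)       # noise scale in (0, 1)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * scale * eps
    return xt, eps, scale

# Usage on a dummy dark image with a standard linear beta schedule.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
kan = KANLayer(in_dim=2, out_dim=1)
x0 = 0.2 * torch.rand(1, 3, 64, 64)                     # low-light input
t = torch.randint(0, T, (1,))
xt, eps, scale = adaptive_forward_diffusion(x0, t, alphas_cumprod, kan)
print(xt.shape, scale.min().item(), scale.max().item())
```

A complete method would train a denoiser alongside this modulation and apply a matching structure-aware reverse step; the sketch only illustrates where a KAN can sit in the noise-injection path.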