Mohammed Y. Abbass, H. Kasban, Zeinab F. Elsharkawy
Low-light image enhancement via improved lightweight YUV attention network
Computers & Graphics, Volume 127, Article 104170. Published 18 January 2025. DOI: 10.1016/j.cag.2025.104170
https://www.sciencedirect.com/science/article/pii/S0097849325000093
Citations: 0
Abstract
Deep learning approaches have achieved notable results in computer vision applications. This paper presents an improved LYT-Net, a lightweight YUV transformer-based model, as an innovative method for enhancing low-light scenes. Unlike traditional Retinex-based methods, the proposed framework operates on the luminance (Y) and chrominance (U and V) channels of the YUV color space, disentangling color details from illumination in the scene. LYT-Net provides a thorough contextual understanding of the image while keeping the architectural burden low. To address the weak feature generation of the traditional Channel-wise Denoiser (CWD) block, an improved CWD is proposed using a triplet attention network, which is exploited to capture both dynamic and static features. Qualitative and quantitative experiments demonstrate that the proposed technique effectively handles images with varying exposure levels and outperforms state-of-the-art techniques. Furthermore, the proposed technique is computationally faster than other Retinex-based techniques, making it a suitable option for real-time computer vision tasks.
The source code is available at https://github.com/Mohammed-Abbass/YUV-Attention
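As an illustration of the color-space split the abstract describes, the sketch below converts RGB values to YUV using the standard BT.601 analog weights, separating luminance from chrominance so each can be processed by a distinct branch. This is a generic illustration under assumed conventions (BT.601 coefficients, RGB in [0, 1]), not the authors' implementation; their repository may use a different variant of the transform.

```python
import numpy as np

# BT.601 RGB -> YUV weights (an illustrative choice; the paper's code
# may use a different variant of the transform).
RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114  ],   # Y: luminance
    [-0.14713, -0.28886,  0.436  ],   # U: blue-difference chrominance
    [ 0.615,   -0.51499, -0.10001],   # V: red-difference chrominance
])

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an (..., 3) RGB array with values in [0, 1] to YUV."""
    return rgb @ RGB_TO_YUV.T

def split_channels(rgb: np.ndarray):
    """Return the luminance (Y) and chrominance (U, V) planes separately,
    mirroring the two-branch processing the abstract describes."""
    yuv = rgb_to_yuv(rgb)
    return yuv[..., 0], yuv[..., 1:]

# A neutral gray pixel carries no chrominance: U and V are (near) zero,
# so all of its information flows through the luminance branch.
y, uv = split_channels(np.array([0.5, 0.5, 0.5]))
```

Because illumination changes in a low-light scene mostly affect the Y plane while color identity lives in U and V, this decomposition lets an enhancement network brighten and denoise luminance without distorting color, which is the motivation the abstract gives for working in YUV rather than RGB.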
About the journal:
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.