A Comprehensive Deep Learning Approach for Dermoscopic Image Enhancement

Abdullah Al Mazed, Md. Faiyaj Ahmed Limon, Shahidul Haque Thouhid, Md Fazle Hasan Shiblee, Shubradeb Das, Md. Shahid Iqbal, Debojyoti Biswas

Franklin Open, Volume 13, Article 100405. Published 2025-10-21.
DOI: 10.1016/j.fraope.2025.100405
URL: https://www.sciencedirect.com/science/article/pii/S2773186325001938
Cited by: 0
Abstract
Image enhancement plays a pivotal role in image processing. In dermoscopic imaging, it is a critical and challenging pre-processing step, essential for accurate automated diagnosis. However, current techniques often struggle with the diverse degradations encountered in real-world scenarios. This study proposes a robust deep learning approach capable of restoring high-quality images across a wide range of realistic degradation scenarios. We introduce two key contributions: first, EnhanceNet-U, a U-Net architecture modified with a simplified bottleneck and an enhanced decoder path for improved feature restoration; and second, a comprehensive synthetic dataset simulating common dermoscopic degradations, including Gaussian noise, brightness and contrast variation, and blur. Extensive experiments evaluated the proposed method against several established baseline models and analyzed the impact of various loss functions and optimizers to determine the optimal configuration. EnhanceNet-U consistently outperformed all competing models, with a peak improvement of 15.75% in SSIM and 15% in PSNR over the lowest-performing baseline, DnCNN. A combination of perceptual loss and MSE emerged as the most effective loss function for balancing quantitative accuracy with perceptual quality. These findings validate the proposed method, demonstrating its capability to restore high-quality images under realistic degradation scenarios and highlighting its potential as a robust solution for the complexities of dermoscopic image enhancement.
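The abstract describes building a synthetic dataset by applying Gaussian noise, brightness/contrast shifts, and blur to clean images, and evaluating restoration with PSNR. The sketch below is a minimal illustration of that kind of degradation pipeline, not the authors' actual implementation: the parameter values, the box-blur kernel (standing in for whatever blur the paper uses), and the grayscale-only handling are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=0.05):
    """Additive Gaussian noise; img is a float array in [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def adjust_brightness_contrast(img, alpha=1.2, beta=-0.1):
    """Linear contrast (alpha) and brightness (beta) adjustment."""
    return np.clip(alpha * img + beta, 0.0, 1.0)

def box_blur(img, k=3):
    """Box blur on a 2-D grayscale image (assumed stand-in for the
    paper's blur degradation), using edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def psnr(clean, degraded, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((clean - degraded) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Chain the degradations to synthesize a training pair (clean, degraded).
clean = rng.random((64, 64))
degraded = box_blur(adjust_brightness_contrast(add_gaussian_noise(clean)))
print(f"PSNR of degraded vs. clean: {psnr(clean, degraded):.2f} dB")
```

An enhancement model such as EnhanceNet-U would then be trained to map `degraded` back to `clean`, with PSNR and SSIM measured between the restored output and the original.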