A Comprehensive Deep Learning Approach for Dermoscopic Image Enhancement

Abdullah Al Mazed, Md. Faiyaj Ahmed Limon, Shahidul Haque Thouhid, Md Fazle Hasan Shiblee, Shubradeb Das, Md. Shahid Iqbal, Debojyoti Biswas
{"title":"A Comprehensive Deep Learning Approach for Dermoscopic Image Enhancement","authors":"Abdullah Al Mazed ,&nbsp;Md. Faiyaj Ahmed Limon ,&nbsp;Shahidul Haque Thouhid ,&nbsp;Md Fazle Hasan Shiblee ,&nbsp;Shubradeb Das ,&nbsp;Md. Shahid Iqbal ,&nbsp;Debojyoti Biswas","doi":"10.1016/j.fraope.2025.100405","DOIUrl":null,"url":null,"abstract":"<div><div>Image enhancement plays a pivotal role in improving image quality within the field of image processing. In the context of dermoscopic imaging, it serves as a critical and challenging pre-processing step, essential for facilitating accurate automated diagnosis. However, current techniques often struggle to address the diverse range of degradations encountered in real-world scenarios. The primary objective of this study is to propose a robust deep learning approach capable of restoring high-quality images from a wide range of realistic degradation scenarios. To achieve this, we introduce two key contributions: first, EnhanceNet-U, a U-Net architecture modified with a simplified bottleneck and an enhanced decoder path for improved feature restoration; and second, a comprehensive synthetic dataset simulating common dermoscopic degradations, including Gaussian noise, variations in brightness and contrast, and blur. Extensive experiments were conducted, evaluating our proposed method against several established baseline models and analyzing the impact of various loss functions and optimizers to determine the optimal configuration. The results show that EnhanceNet-U consistently outperformed all competing models, demonstrating a peak improvement of 15.75% in SSIM and 15% in PSNR when compared to the lowest-performing DnCNN model. The combination of perceptual loss and MSE emerged as the most effective loss function for balancing quantitative accuracy with perceptual quality. 
These findings validate our proposed method, proving its capability to restore high-quality images under realistic degradation scenarios and highlighting its potential as a robust solution for the complexities of dermoscopic image enhancement.</div></div>","PeriodicalId":100554,"journal":{"name":"Franklin Open","volume":"13 ","pages":"Article 100405"},"PeriodicalIF":0.0000,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Franklin Open","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2773186325001938","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Image enhancement plays a pivotal role in improving image quality within the field of image processing. In the context of dermoscopic imaging, it serves as a critical and challenging pre-processing step, essential for facilitating accurate automated diagnosis. However, current techniques often struggle to address the diverse range of degradations encountered in real-world scenarios. The primary objective of this study is to propose a robust deep learning approach capable of restoring high-quality images from a wide range of realistic degradation scenarios. To achieve this, we introduce two key contributions: first, EnhanceNet-U, a U-Net architecture modified with a simplified bottleneck and an enhanced decoder path for improved feature restoration; and second, a comprehensive synthetic dataset simulating common dermoscopic degradations, including Gaussian noise, variations in brightness and contrast, and blur. Extensive experiments were conducted, evaluating our proposed method against several established baseline models and analyzing the impact of various loss functions and optimizers to determine the optimal configuration. The results show that EnhanceNet-U consistently outperformed all competing models, demonstrating a peak improvement of 15.75% in SSIM and 15% in PSNR when compared to the lowest-performing DnCNN model. The combination of perceptual loss and MSE emerged as the most effective loss function for balancing quantitative accuracy with perceptual quality. These findings validate our proposed method, proving its capability to restore high-quality images under realistic degradation scenarios and highlighting its potential as a robust solution for the complexities of dermoscopic image enhancement.
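The abstract describes a synthetic dataset built by applying Gaussian noise, brightness and contrast variations, and blur to clean dermoscopic images. The paper's exact degradation parameters are not given here, so the following is only a minimal NumPy sketch of such a pipeline; the function name `degrade` and the parameter ranges (`noise_sigma`, `brightness`, `contrast`, `blur_ksize`) are illustrative assumptions, and a simple box blur stands in for whatever blur model the authors used.

```python
import numpy as np

def degrade(img, rng, noise_sigma=0.05, brightness=0.1, contrast=0.2, blur_ksize=3):
    """Apply the degradations named in the abstract to a grayscale image in [0, 1]:
    contrast scaling, brightness shift, blur, and additive Gaussian noise."""
    out = img.astype(np.float64)
    # Contrast: scale around mid-gray; brightness: additive shift.
    c = 1.0 + rng.uniform(-contrast, contrast)
    b = rng.uniform(-brightness, brightness)
    out = (out - 0.5) * c + 0.5 + b
    # Box blur as a simple stand-in for the blur degradation.
    pad = blur_ksize // 2
    padded = np.pad(out, pad, mode="edge")
    blurred = np.zeros_like(out)
    h, w = out.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + blur_ksize, j:j + blur_ksize].mean()
    out = blurred
    # Additive Gaussian noise.
    out += rng.normal(0.0, noise_sigma, out.shape)
    return np.clip(out, 0.0, 1.0)
```

Pairing each clean image with its degraded counterpart in this way yields the (degraded, clean) training pairs a restoration network such as the proposed EnhanceNet-U would learn from.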
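The abstract reports that perceptual loss combined with MSE balanced quantitative accuracy with perceptual quality, and that gains were measured in PSNR and SSIM. The sketch below shows the shape of such a combined objective, plus a standard PSNR metric. The paper's perceptual backbone is not specified in the abstract, so the toy gradient-based `feature_map` here is only a stand-in for pretrained-network activations (e.g. VGG features), and the names `combined_loss` and weight `lam` are illustrative assumptions.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def feature_map(img):
    # Stand-in for a pretrained feature extractor: horizontal and
    # vertical gradients crudely capture edge structure.
    return np.diff(img, axis=1), np.diff(img, axis=0)

def combined_loss(pred, target, lam=0.1):
    """Pixel-wise MSE plus a weighted 'perceptual' term on feature maps."""
    pixel = mse(pred, target)
    fx_p, fy_p = feature_map(pred)
    fx_t, fy_t = feature_map(target)
    perceptual = mse(fx_p, fx_t) + mse(fy_p, fy_t)
    return pixel + lam * perceptual

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    m = mse(pred, target)
    if m == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / m)
```

The MSE term anchors the reconstruction to the ground truth pixel values, while the perceptual term penalizes structural differences that per-pixel error alone can miss, which is the trade-off the abstract credits for the best results.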