Multiclass skin lesion classification and localization from dermoscopic images using a novel network-level fused deep architecture and explainable artificial intelligence.

IF 3.8 | CAS Zone 3 (Medicine) | JCR Q2, Medical Informatics
Mehak Arshad, Muhammad Attique Khan, Nouf Abdullah Almujally, Areej Alasiry, Mehrez Marzougui, Yunyoung Nam
{"title":"Multiclass skin lesion classification and localziation from dermoscopic images using a novel network-level fused deep architecture and explainable artificial intelligence.","authors":"Mehak Arshad, Muhammad Attique Khan, Nouf Abdullah Almujally, Areej Alasiry, Mehrez Marzougui, Yunyoung Nam","doi":"10.1186/s12911-025-03051-2","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and objective: </strong>Early detection and classification of skin cancer are critical for improving patient outcomes. Dermoscopic image analysis using Computer-Aided Diagnostics (CAD) is a powerful tool to assist dermatologists in identifying and classifying skin lesions. Traditional machine learning models require extensive feature engineering, which is time-consuming and less effective in handling complex data like skin lesions. This study proposes a deep learning-based network-level fusion architecture that integrates multiple deep models to enhance the classification and localization of skin lesions in dermoscopic images. The goal is to address challenges like irregular lesion shapes, inter-class similarities, and class imbalances while providing explainability through artificial intelligence.</p><p><strong>Methods: </strong>A novel hybrid contrast enhancement technique was applied for pre-processing and dataset augmentation. Two deep learning models, a 5-block inverted residual network and a 6-block inverted bottleneck network, were designed and fused at the network level using a depth concatenation approach. The models were trained using Bayesian optimization for hyperparameter tuning. Feature extraction was performed with a global average pooling layer, and shallow neural networks were used for final classification. Explainable AI techniques, including LIME, were used to interpret model predictions and localize lesion regions. Experiments were conducted on two publicly available datasets, HAM10000 and ISIC2018, which were split into training and testing sets.</p><p><strong>Results: </strong>The proposed fused architecture achieved high classification accuracy, with results of 91.3% and 90.7% on the HAM10000 and ISIC2018 datasets, respectively. Sensitivity, precision, and F1-scores were significantly improved after data augmentation, with precision rates of up to 90.91%. The explainable AI component effectively localized lesion areas with high confidence, enhancing the model's interpretability.</p><p><strong>Conclusions: </strong>The network-level fusion architecture combined with explainable AI techniques significantly improved the classification and localization of skin lesions. The augmentation and contrast enhancement processes enhanced lesion visibility, while fusion of models optimized classification accuracy. 
This approach shows potential for implementation in CAD systems for skin cancer diagnosis, although future work is required to address the limitations of computational resource requirements and training time.</p><p><strong>Clinical trail number: </strong>Not applicable.</p>","PeriodicalId":9340,"journal":{"name":"BMC Medical Informatics and Decision Making","volume":"25 1","pages":"215"},"PeriodicalIF":3.8000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12211947/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Informatics and Decision Making","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12911-025-03051-2","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Citations: 0

Abstract

Background and objective: Early detection and classification of skin cancer are critical for improving patient outcomes. Dermoscopic image analysis using Computer-Aided Diagnostics (CAD) is a powerful tool to assist dermatologists in identifying and classifying skin lesions. Traditional machine learning models require extensive feature engineering, which is time-consuming and less effective in handling complex data like skin lesions. This study proposes a deep learning-based network-level fusion architecture that integrates multiple deep models to enhance the classification and localization of skin lesions in dermoscopic images. The goal is to address challenges like irregular lesion shapes, inter-class similarities, and class imbalances while providing explainability through artificial intelligence.

Methods: A novel hybrid contrast enhancement technique was applied for pre-processing and dataset augmentation. Two deep learning models, a 5-block inverted residual network and a 6-block inverted bottleneck network, were designed and fused at the network level using a depth concatenation approach. The models were trained using Bayesian optimization for hyperparameter tuning. Feature extraction was performed with a global average pooling layer, and shallow neural networks were used for final classification. Explainable AI techniques, including LIME, were used to interpret model predictions and localize lesion regions. Experiments were conducted on two publicly available datasets, HAM10000 and ISIC2018, which were split into training and testing sets.
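The abstract gives no implementation details beyond this description, but the central idea of network-level fusion by depth concatenation can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch illustration: the block definition, channel widths, input size, and the name `FusedLesionNet` are assumptions, not the authors' actual configuration; only the overall pattern (two convolutional branches, depth concatenation, global average pooling, shallow classification head) follows the description above.

```python
# Minimal sketch of network-level fusion by depth concatenation (PyTorch).
# Branch depths and channel widths are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

def inverted_residual_block(in_ch, out_ch, expand=4):
    """Expand -> depthwise conv (stride 2) -> project, MobileNetV2-style."""
    mid = in_ch * expand
    return nn.Sequential(
        nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
        nn.Conv2d(mid, mid, 3, stride=2, padding=1, groups=mid, bias=False),
        nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
        nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
    )

class FusedLesionNet(nn.Module):
    def __init__(self, num_classes=7):  # 7 lesion classes in HAM10000
        super().__init__()
        # Branch A: 5 inverted-residual blocks (illustrative channel progression).
        chs_a = [3, 16, 32, 64, 96, 128]
        self.branch_a = nn.Sequential(*[
            inverted_residual_block(chs_a[i], chs_a[i + 1]) for i in range(5)
        ])
        # Branch B: 6 inverted-bottleneck blocks.
        chs_b = [3, 16, 24, 48, 64, 96, 128]
        self.branch_b = nn.Sequential(*[
            inverted_residual_block(chs_b[i], chs_b[i + 1]) for i in range(6)
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.classifier = nn.Sequential(             # shallow classification head
            nn.Linear(128 + 128, 64), nn.ReLU(inplace=True), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        fa = self.branch_a(x)                        # e.g. (N, 128, 8, 8) for 256x256 input
        fb = self.branch_b(x)                        # e.g. (N, 128, 4, 4)
        # Depth concatenation needs matching spatial size; resize branch B.
        fb = nn.functional.interpolate(fb, size=fa.shape[-2:], mode="bilinear",
                                       align_corners=False)
        fused = torch.cat([fa, fb], dim=1)           # network-level fusion on the channel axis
        feats = self.pool(fused).flatten(1)          # 256-D fused feature vector
        return self.classifier(feats)

model = FusedLesionNet()
logits = model(torch.randn(2, 3, 256, 256))          # -> shape (2, 7)
```

In this sketch the shallow head stands in for the shallow neural-network classifiers mentioned above; hyperparameters such as depth, width, and learning rate would be the quantities tuned by Bayesian optimization.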

Results: The proposed fused architecture achieved high classification accuracy, with results of 91.3% and 90.7% on the HAM10000 and ISIC2018 datasets, respectively. Sensitivity, precision, and F1-scores were significantly improved after data augmentation, with precision rates of up to 90.91%. The explainable AI component effectively localized lesion areas with high confidence, enhancing the model's interpretability.
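As a hedged illustration of the localization step, the sketch below shows how LIME (via the public `lime` package) can highlight the superpixels that drive a prediction from a trained classifier such as the hypothetical `FusedLesionNet` above. The segmentation, sampling, and feature counts are illustrative defaults, not the configuration reported in the paper.

```python
# Illustrative LIME-based lesion localization (assumes `model` from the sketch above).
import numpy as np
import torch
from lime import lime_image
from skimage.segmentation import mark_boundaries

model.eval()

def classifier_fn(images):
    """LIME passes a batch of HxWx3 arrays; return class probabilities."""
    batch = torch.from_numpy(images.astype(np.float32) / 255.0).permute(0, 3, 1, 2)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs.numpy()

# Placeholder stands in for a pre-processed dermoscopic image.
image = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=1000
)
# Keep the superpixels that most support the top predicted class.
label = explanation.top_labels[0]
lesion_img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(lesion_img / 255.0, mask)  # rough lesion-localization overlay
```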

Conclusions: The network-level fusion architecture combined with explainable AI techniques significantly improved the classification and localization of skin lesions. The augmentation and contrast enhancement processes enhanced lesion visibility, while fusion of models optimized classification accuracy. This approach shows potential for implementation in CAD systems for skin cancer diagnosis, although future work is required to address the limitations of computational resource requirements and training time.

Clinical trial number: Not applicable.

Source journal: BMC Medical Informatics and Decision Making
CiteScore: 7.20 | Self-citation rate: 5.70% | Annual articles: 297 | Review time: ~1 month
Journal description: BMC Medical Informatics and Decision Making is an open access journal publishing original peer-reviewed research articles in relation to the design, development, implementation, use, and evaluation of health information technologies and decision-making for human health.