Multiclass skin lesion classification and localization from dermoscopic images using a novel network-level fused deep architecture and explainable artificial intelligence.
Mehak Arshad, Muhammad Attique Khan, Nouf Abdullah Almujally, Areej Alasiry, Mehrez Marzougui, Yunyoung Nam
DOI: 10.1186/s12911-025-03051-2
Journal: BMC Medical Informatics and Decision Making, 25(1): 215
Published: 2025-07-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12211947/pdf/
Citations: 0
Abstract
Background and objective: Early detection and classification of skin cancer are critical for improving patient outcomes. Dermoscopic image analysis using Computer-Aided Diagnosis (CAD) is a powerful tool to assist dermatologists in identifying and classifying skin lesions. Traditional machine learning models require extensive feature engineering, which is time-consuming and less effective in handling complex data such as skin lesions. This study proposes a deep learning-based network-level fusion architecture that integrates multiple deep models to enhance the classification and localization of skin lesions in dermoscopic images. The goal is to address challenges such as irregular lesion shapes, inter-class similarities, and class imbalances while providing explainability through artificial intelligence.
Methods: A novel hybrid contrast enhancement technique was applied for pre-processing and dataset augmentation. Two deep learning models, a 5-block inverted residual network and a 6-block inverted bottleneck network, were designed and fused at the network level using a depth concatenation approach. The models were trained using Bayesian optimization for hyperparameter tuning. Feature extraction was performed with a global average pooling layer, and shallow neural networks were used for final classification. Explainable AI techniques, including LIME, were used to interpret model predictions and localize lesion regions. Experiments were conducted on two publicly available datasets, HAM10000 and ISIC2018, which were split into training and testing sets.
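The network-level fusion described above can be sketched in code. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: two CNN branches (a 5-block and a 6-block stack of inverted residual blocks) are fused by concatenating their feature maps along the channel (depth) axis, followed by global average pooling and a shallow classification head. Block widths, the stem design, and the 7-class output (matching HAM10000's seven lesion categories) are illustrative assumptions.

```python
# Hypothetical sketch of network-level fusion via depth concatenation.
# Channel widths, block internals, and class count are assumptions.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """Simplified inverted residual block: expand -> depthwise -> project."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),        # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1,
                      groups=hidden, bias=False),               # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),        # project
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual (identity) connection


def make_branch(num_blocks: int, channels: int) -> nn.Sequential:
    """One CNN branch: a conv stem followed by a stack of inverted blocks."""
    stem = nn.Conv2d(3, channels, 3, stride=2, padding=1)
    blocks = [InvertedResidual(channels) for _ in range(num_blocks)]
    return nn.Sequential(stem, *blocks)


class FusedNet(nn.Module):
    """Two branches fused at the network level by depth concatenation."""

    def __init__(self, num_classes: int = 7, channels: int = 32):
        super().__init__()
        self.branch_a = make_branch(5, channels)   # 5-block network
        self.branch_b = make_branch(6, channels)   # 6-block network
        self.gap = nn.AdaptiveAvgPool2d(1)         # global average pooling
        self.classifier = nn.Linear(2 * channels, num_classes)  # shallow head

    def forward(self, x):
        # Concatenate feature maps along the channel (depth) axis.
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.classifier(self.gap(fused).flatten(1))
```

For example, `FusedNet()(torch.randn(2, 3, 224, 224))` yields class logits of shape `(2, 7)`. The depth-concatenation step is what makes the fusion "network-level": gradients flow through both branches jointly during training, rather than combining independently trained feature vectors afterwards.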
Results: The proposed fused architecture achieved high classification accuracy, reaching 91.3% and 90.7% on the HAM10000 and ISIC2018 datasets, respectively. Sensitivity, precision, and F1-scores were significantly improved after data augmentation, with precision rates of up to 90.91%. The explainable AI component effectively localized lesion areas with high confidence, enhancing the model's interpretability.
Conclusions: The network-level fusion architecture combined with explainable AI techniques significantly improved the classification and localization of skin lesions. The augmentation and contrast enhancement processes enhanced lesion visibility, while fusion of models optimized classification accuracy. This approach shows potential for implementation in CAD systems for skin cancer diagnosis, although future work is required to address the limitations of computational resource requirements and training time.

Clinical trial number: Not applicable.
About the journal:
BMC Medical Informatics and Decision Making is an open access journal publishing original peer-reviewed research articles in relation to the design, development, implementation, use, and evaluation of health information technologies and decision-making for human health.