Fusion of metadata and dermoscopic images for melanoma detection: Deep learning and feature importance analysis

IF 14.7 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Misbah Ahmad, Imran Ahmed, Abdellah Chehri, Gwangill Jeon
{"title":"Fusion of metadata and dermoscopic images for melanoma detection: Deep learning and feature importance analysis","authors":"Misbah Ahmad ,&nbsp;Imran Ahmed ,&nbsp;Abdellah Chehri ,&nbsp;Gwangill Jeon","doi":"10.1016/j.inffus.2025.103304","DOIUrl":null,"url":null,"abstract":"<div><div>In the era of smart healthcare, integrating multimodal data is essential for improving diagnostic accuracy and enabling personalized care. This study presented a deep learning-based multimodal approach for melanoma detection, leveraging both dermoscopic images and clinical metadata to enhance classification performance. The proposed model integrated a multi-layer convolutional neural network (CNN) to extract image features and combined them with structured metadata, including patient age, gender, and lesion location, through feature-level fusion. The fusion process occurred at the final CNN layer, where high-dimensional image feature vectors were concatenated with processed metadata. The metadata was handled separately through a fully connected neural network comprising multiple dense layers. The final fused representation was passed through additional dense layers, culminating in a classification layer that outputted the probability of melanoma presence. The model was trained end-to-end using the SIIM-ISIC dataset, allowing it to learn a joint representation of image and metadata features for optimal classification. Various data augmentation techniques were applied to dermoscopic images to mitigate class imbalance and improve model robustness. Additionally, exploratory data analysis (EDA) and feature importance analysis were conducted to assess the contribution of each metadata feature to the overall classification. Our fusion-based deep learning architecture outperformed single-modality models, boosting classification accuracy. The presented model achieved an accuracy of 94.5% and an overall F1-score of 0.94, validating its effectiveness in melanoma detection. This study aims to highlight the potential of deep learning-based multimodal fusion in enhancing diagnostic precision, offering a scalable and reliable solution for improved melanoma detection in smart healthcare systems.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"124 ","pages":"Article 103304"},"PeriodicalIF":14.7000,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156625352500377X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In the era of smart healthcare, integrating multimodal data is essential for improving diagnostic accuracy and enabling personalized care. This study presents a deep learning-based multimodal approach for melanoma detection that leverages both dermoscopic images and clinical metadata to enhance classification performance. The proposed model uses a multi-layer convolutional neural network (CNN) to extract image features and combines them, through feature-level fusion, with structured metadata including patient age, gender, and lesion location. Fusion occurs at the final CNN layer, where high-dimensional image feature vectors are concatenated with the processed metadata. The metadata is handled separately by a fully connected neural network comprising multiple dense layers. The fused representation is passed through additional dense layers, culminating in a classification layer that outputs the probability of melanoma presence. The model is trained end-to-end on the SIIM-ISIC dataset, allowing it to learn a joint representation of image and metadata features for optimal classification. Various data augmentation techniques are applied to the dermoscopic images to mitigate class imbalance and improve model robustness. Additionally, exploratory data analysis (EDA) and feature importance analysis are conducted to assess the contribution of each metadata feature to the overall classification. The fusion-based deep learning architecture outperforms single-modality models, achieving an accuracy of 94.5% and an overall F1-score of 0.94, which validates its effectiveness in melanoma detection. These results highlight the potential of deep learning-based multimodal fusion to enhance diagnostic precision, offering a scalable and reliable solution for melanoma detection in smart healthcare systems.
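The abstract describes feature-level fusion: a CNN branch for the dermoscopic image, a dense branch for the metadata, concatenation of the two feature vectors, and a final classification head. The following is a minimal sketch of that structure in PyTorch; the backbone depth, layer widths, 224×224 input size, and the 11-dimensional metadata encoding (normalized age plus one-hot gender and lesion location) are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of feature-level fusion of a CNN image branch and a metadata branch.
import torch
import torch.nn as nn

class FusionMelanomaNet(nn.Module):
    def __init__(self, num_metadata_features: int = 11):
        super().__init__()
        # Multi-layer CNN branch for dermoscopic images (3 x 224 x 224 assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 128-dim image feature vector
        )
        # Fully connected branch for structured metadata
        # (e.g. normalized age plus one-hot gender and lesion location).
        self.meta = nn.Sequential(
            nn.Linear(num_metadata_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Dense layers applied to the concatenated (fused) representation,
        # ending in a single melanoma-probability output.
        self.head = nn.Sequential(
            nn.Linear(128 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        # Feature-level fusion: concatenate the two branch outputs.
        fused = torch.cat([self.cnn(image), self.meta(metadata)], dim=1)
        return self.head(fused)

# Example forward pass with random tensors standing in for a batch of 4 samples.
model = FusionMelanomaNet()
prob = model(torch.randn(4, 3, 224, 224), torch.randn(4, 11))
print(prob.shape)  # torch.Size([4, 1])
```

Because both branches feed a single head, the whole model can be trained end-to-end with a binary cross-entropy loss, which is consistent with the joint image-metadata representation described in the abstract.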
Source Journal

Information Fusion (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles published: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.