FVCM-Net: Interpretable privacy-preserved attention driven lung cancer detection from CT scan images with explainable HiRes-CAM attribution map and ensemble learning

IF 4.9 · CAS Zone 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL
Abu Sayem Md Siam, Md. Mehedi Hasan, Yeasir Arafat, Md Muzadded Chowdhury, Sayed Hossain Jobayer, Fahim Hafiz, Riasat Azim
DOI: 10.1016/j.bspc.2025.108719 · Biomedical Signal Processing and Control, Volume 112, Article 108719 · Published: 2025-10-01 · https://www.sciencedirect.com/science/article/pii/S1746809425012303
Citations: 0

Abstract

Lung cancer is a predominant cause of cancer-related deaths globally, and early detection is essential for improving patient prognosis. Deep learning models with attention mechanisms have shown promising accuracy in detecting lung cancer from medical imaging data. However, privacy concerns and data scarcity present significant challenges in developing robust and generalizable models. This paper proposes a novel approach for lung cancer detection, ‘FVCM-Net’, integrating federated learning with attention mechanisms and ensemble learning to address these challenges. Federated learning is employed to train the model across multiple decentralized institutions, allowing for collaborative model development without sharing sensitive patient data and minimizing the risk of such sensitive data being misused. Furthermore, this approach enables the development of more accurate and generalized models by leveraging diverse datasets from multiple sources. We employed ensemble learning to produce more accurate predictions than a single model. For interpretability of the lung cancer identification model, we employ XAI (Explainable Artificial Intelligence) techniques such as SHAP (SHapley Additive exPlanations) and HiResCAM (High-Resolution Class Activation Mapping). These techniques help us understand how the model makes its decisions and predictions. This study utilizes a diverse collection of lung CT scan images from four datasets, including LIDC-IDRI, IQ-OTH/NCCD, a public Kaggle dataset, and additional online sources. Experimental results revealed that the proposed method achieved higher performance in lung cancer detection with 98.26% average accuracy and 97.37% average F-1 score. The high performance of FVCM-Net and ensemble learning has the potential to significantly impact medical imaging, helping radiologists make better clinical decisions.
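The abstract provides no implementation details, but the two training-side ideas it names are straightforward to illustrate: FedAvg-style weighted averaging of locally trained weights across institutions, and soft-voting over the ensemble members' class probabilities. The sketch below is a minimal, generic illustration in PyTorch; the toy linear classifier, the client sample counts, and the function names are assumptions for illustration, not the authors' FVCM-Net code.

```python
# FedAvg-style aggregation and soft-voting ensemble, sketched generically.
# The toy nn.Linear "clients", their sample counts, and the function names
# are assumptions for illustration, not the authors' FVCM-Net implementation.
from collections import OrderedDict

import torch
import torch.nn as nn


def federated_average(client_states, client_sizes):
    """Weighted average of client state_dicts; weights follow local dataset size."""
    total = float(sum(client_sizes))
    averaged = OrderedDict()
    for key in client_states[0]:
        averaged[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return averaged


def soft_vote(probabilities):
    """Ensemble prediction: mean of the members' class-probability tensors."""
    return torch.stack(probabilities, dim=0).mean(dim=0)


if __name__ == "__main__":
    # Three "institutions" hold local copies of the same toy classifier.
    clients = [nn.Linear(16, 2) for _ in range(3)]
    sizes = [120, 300, 80]                                   # local sample counts
    global_model = nn.Linear(16, 2)
    global_model.load_state_dict(
        federated_average([c.state_dict() for c in clients], sizes)
    )

    x = torch.randn(4, 16)                                   # stand-in feature batch
    member_probs = [torch.softmax(c(x), dim=1) for c in clients]
    print(soft_vote(member_probs).shape)                     # torch.Size([4, 2])
```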
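On the interpretability side, HiResCAM differs from Grad-CAM in that it multiplies the gradients element-wise with the feature-map activations and then sums over channels, instead of first averaging the gradients spatially into per-channel weights. The following is a minimal sketch of such an attribution map, assuming a torchvision ResNet-18 stand-in for the real backbone and a random tensor in place of a preprocessed CT slice; it is an illustrative reimplementation, not the authors' code.

```python
# HiResCAM-style attribution sketch in PyTorch. The ResNet-18 backbone, the
# choice of model.layer4[-1] as target layer, and the random input are
# illustrative assumptions standing in for FVCM-Net and a real CT slice.
import torch
import torch.nn.functional as F
from torchvision import models


def hirescam(model, x, target_layer, class_idx=None):
    """Return a [H, W] heatmap: ReLU(sum_c grad_c * activation_c), upsampled."""
    feats = []
    handle = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    try:
        logits = model(x)                                   # (1, num_classes)
    finally:
        handle.remove()
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    act = feats[0]                                          # (1, C, h, w)
    grad = torch.autograd.grad(logits[0, class_idx], act)[0]
    cam = F.relu((grad * act).sum(dim=1, keepdim=True))     # element-wise product, channel sum
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze(0).squeeze(0).detach(), class_idx


if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()            # placeholder backbone
    x = torch.randn(1, 3, 224, 224)                         # stand-in CT slice
    heatmap, cls = hirescam(model, x, model.layer4[-1])
    print(heatmap.shape, cls)                               # torch.Size([224, 224]), class id
```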
Source journal
Biomedical Signal Processing and Control (Engineering & Technology: Biomedical Engineering)
CiteScore: 9.80 · Self-citation rate: 13.70% · Articles published: 822 · Review time: 4 months
Journal description: Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management. Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.