Authors: Nhu-Y Tran-Van, Kim-Hung Le
Journal: Computerized Medical Imaging and Graphics, Volume 124, Article 102588
DOI: 10.1016/j.compmedimag.2025.102588
Published: 2025-06-21 (Journal Article)
Journal metrics: Impact Factor 4.9; JCR Q1, Engineering, Biomedical
URL: https://www.sciencedirect.com/science/article/pii/S0895611125000977
A multimodal skin lesion classification through cross-attention fusion and collaborative edge computing
Skin cancer is a significant global health concern requiring early and accurate diagnosis to improve patient outcomes. While deep learning-based computer-aided diagnosis (CAD) systems have emerged as effective diagnostic support tools, they often face three key limitations: low diagnostic accuracy due to reliance on single-modality data (e.g., dermoscopic images), high network latency in cloud deployments, and privacy risks from transmitting sensitive medical data to centralized servers. To overcome these limitations, we propose a unified solution that integrates a multimodal deep learning model with a collaborative inference scheme for skin lesion classification. Our model enhances diagnostic accuracy by fusing dermoscopic images with patient metadata via a novel cross-attention-based feature fusion mechanism. Meanwhile, the collaborative scheme distributes computational tasks across IoT and edge devices, reducing latency and enhancing data privacy by processing sensitive information locally. Our experiments on multiple benchmark datasets demonstrate the effectiveness of this approach and its generalizability, such as achieving a classification accuracy of 95.73% on the HAM10000 dataset, outperforming competitors. Furthermore, the collaborative inference scheme significantly improves efficiency, achieving latency speedups of up to 20% and 47% over device-only and edge-only schemes.
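The abstract describes a cross-attention-based mechanism that fuses dermoscopic image features with patient metadata, but gives no implementation detail. The sketch below shows the general shape of such a fusion step, with image-patch features attending over metadata embeddings. Everything here is an illustrative assumption, not the authors' model: the function name, the dimensions, the residual connection, and the omission of learned projection matrices (a real model would apply learned W_q, W_k, W_v before the dot products).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(img_tokens, meta_tokens):
    """Fuse image-patch features (queries) with patient-metadata
    embeddings (keys/values) via scaled dot-product cross-attention.
    Learned projections (W_q, W_k, W_v) are omitted for brevity."""
    d_k = img_tokens.shape[-1]
    scores = img_tokens @ meta_tokens.T / np.sqrt(d_k)  # (n_img, n_meta)
    weights = softmax(scores, axis=-1)                  # attention over metadata
    attended = weights @ meta_tokens                    # (n_img, d)
    return img_tokens + attended, weights               # residual fusion

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 64))   # e.g., 16 patch embeddings from an image backbone
meta = rng.normal(size=(4, 64))   # e.g., embedded age, sex, lesion-site attributes
fused, weights = cross_attention_fuse(img, meta)
```

Each fused image token is its original feature plus a metadata summary weighted by relevance to that token, which is one plausible way a cross-attention fusion block can let clinical context modulate visual features.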
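The reported latency speedups over device-only and edge-only baselines come from splitting the network between an IoT device and an edge server. A minimal latency model, with entirely hypothetical numbers and names (the paper does not publish its cost model), illustrates why an intermediate split point can beat either extreme: early layers run locally, and only a small intermediate activation crosses the network.

```python
def latency(layers_flops, split, device_gflops, edge_gflops,
            intermediate_mb, bandwidth_mbps):
    """Total latency (s) when layers [0, split) run on-device and the
    rest on the edge server. split == len(layers_flops) is device-only;
    split == 0 is edge-only (the raw input is transmitted)."""
    t_device = sum(layers_flops[:split]) / (device_gflops * 1e9)
    t_edge = sum(layers_flops[split:]) / (edge_gflops * 1e9)
    # Transmit the activation produced at the split point (MB -> Mb).
    t_net = intermediate_mb[split] * 8 / bandwidth_mbps
    return t_device + t_net + t_edge

flops = [2e9, 1e9, 0.5e9, 0.2e9]     # per-layer FLOPs (hypothetical)
inter = [4.0, 1.0, 0.25, 0.1, 0.01]  # MB sent at each possible split point
best = min(range(len(flops) + 1),
           key=lambda s: latency(flops, s, 10, 100, inter, 50))
```

Under these made-up numbers the optimum is an interior split: activations shrink through the network, so deferring transmission until they are small outweighs the device's slower compute, mirroring the trade-off the collaborative scheme exploits.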
Journal introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.