Deep learning for Parkinson's disease classification using multimodal and multi-sequences PET/MR images
Yan Chang, Jiajin Liu, Shuwei Sun, Tong Chen, Ruimin Wang
EJNMMI Research, volume 15, issue 1, article 55. Published 2025-05-09. DOI: 10.1186/s13550-025-01245-3
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12064532/pdf/
Citations: 0
Abstract
Background: We aimed to use deep learning (DL) techniques to accurately differentiate Parkinson's disease (PD) from multiple system atrophy (MSA), which have similar clinical presentations. This retrospective analysis included 206 patients clinically diagnosed with either PD or MSA who underwent PET/MR imaging at the Chinese PLA General Hospital; an additional 38 healthy volunteers served as normal controls (NC). All subjects were randomly assigned to training and test sets at a ratio of 7:3. The model input consisted of 10 two-dimensional (2D) slices in the axial, coronal, and sagittal planes from the multi-modal images. A modified 18-layer residual network (ResNet18) was trained on images from the different modalities to classify PD, MSA, and NC. Four-fold cross-validation was applied to the training set. Performance was evaluated with accuracy, precision, recall, F1 score, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC).
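For readers who want a concrete picture of such a setup, the following is a minimal sketch (not the authors' code) of a ResNet18 adapted to three-class PD/MSA/NC classification. The choice of PyTorch/torchvision, the stacking of the 2D slices as input channels, and the input size are all assumptions; the abstract does not specify these implementation details.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# a ResNet18 adapted for 3-class classification (PD / MSA / NC)
# from stacked 2D slices of multi-modal PET/MR images.
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_CLASSES = 3    # PD, MSA, NC
IN_CHANNELS = 10   # hypothetical: the 10 2D slices stacked as input channels

model = resnet18(weights=None)  # train from scratch (no ImageNet weights)
# Replace the first convolution so it accepts the multi-slice input...
model.conv1 = nn.Conv2d(IN_CHANNELS, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)
# ...and the final fully connected layer for the three diagnostic classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

x = torch.randn(4, IN_CHANNELS, 224, 224)  # dummy batch of slice stacks
logits = model(x)                           # shape: (4, 3)
print(logits.shape)
```

In a multi-modal variant, slices from different modalities (e.g. PET and ADC maps) could be concatenated along the channel dimension or passed through separate branches; the abstract does not state which fusion strategy was used.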
Results: Six single-modal models and seven multi-modal models were trained and tested. The PET models outperformed the MRI models. The 11C-methyl-N-2β-carbomethoxy-3β-(4-fluorophenyl)-tropane (11C-CFT)-apparent diffusion coefficient (ADC) model showed the best classification performance, with an accuracy of 0.97, precision of 0.93, recall of 0.95, F1 score of 0.92, and AUC of 0.96. In the test set, the accuracy, precision, recall, and F1 score of the CFT-ADC model were 0.70, 0.73, 0.93, and 0.82, respectively.
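The reported metrics can be computed in form (not in value) with standard tooling. The sketch below uses scikit-learn on placeholder labels and probabilities; the macro averaging for the multi-class precision, recall, and F1 is an assumption, since the abstract does not state the averaging scheme.

```python
# Illustrative sketch of the reported evaluation metrics using scikit-learn.
# The labels and probabilities below are placeholders, not study data.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = np.array([0, 1, 2, 1, 0, 2])   # hypothetical labels: 0=PD, 1=MSA, 2=NC
y_prob = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.3, 0.6, 0.1],
                   [0.6, 0.3, 0.1],
                   [0.2, 0.2, 0.6]])     # predicted class probabilities
y_pred = y_prob.argmax(axis=1)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1       :", f1_score(y_true, y_pred, average="macro"))
print("AUC      :", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```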
Conclusions: The proposed DL method shows potential as a high-performance assistive tool for the accurate diagnosis of PD and MSA. A multi-modal, multi-sequence model could further enhance the ability to classify PD.
EJNMMI Research · RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
CiteScore
5.90
Self-citation rate
3.10%
Articles published
72
Review time
13 weeks
Journal introduction:
EJNMMI Research publishes new basic, translational and clinical research in the field of nuclear medicine and molecular imaging. Regular features include original research articles, rapid communication of preliminary data on innovative research, interesting case reports, editorials, and letters to the editor. Educational articles on basic sciences, fundamental aspects and controversy related to pre-clinical and clinical research or ethical aspects of research are also welcome. Timely reviews provide updates on current applications, issues in imaging research and translational aspects of nuclear medicine and molecular imaging technologies.
The main emphasis is placed on the development of targeted imaging with radiopharmaceuticals within the broader context of molecular probes to enhance understanding and characterisation of the complex biological processes underlying disease and to develop, test and guide new treatment modalities, including radionuclide therapy.