{"title":"多模态2.5d卷积神经网络在磁共振成像和正电子发射断层扫描诊断阿尔茨海默病中的应用","authors":"Xuyang Zhang, Weiming Lin, Min Xiao, Huazhi Ji","doi":"10.2528/pier21051102","DOIUrl":null,"url":null,"abstract":"Alzheimer’s disease (AD) is a degenerative disease of the nervous system that often occurs in the elderly. As magnetic resonance imaging (MRI) and positron emission tomography (PET) reflect the brain’s anatomical changes and functional changes caused by AD, they are often used to diagnose AD. Multimodal fusion based on these two types of images can effectively utilize complementary information and improve diagnostic performance. To avoid the computational complexity of the 3D image and expand training samples, this study designed an AD diagnosis framework based on a 2.5D convolutional neural network (CNN) to fuse multimodal data. First, MRI and PET were preprocessed with skull stripping and registration. After that, multiple 2.5D patches were extracted within the hippocampus regions from both MRI and PET. Then, we constructed a multimodal 2.5D CNN to integrate the multimodal information fromMRI and PET patches. We also utilized a training strategy called branches pre-training to enhance the feature extraction ability of the 2.5D CNN by pre-training two branches with corresponding modalities individually. Finally, the results of patches are used to diagnose AD and progressive mild cognitive impairment (pMCI) patients from normal controls (NC). The experiments were conducted with the ADNI dataset, and accuracies of 92.89% and 84.07% were achieved in the AD vs. NC and pMCI vs. NC tasks. The results are much better than using single modality and indicate that the proposed multimodal 2.5D CNN could effectively integrate complementary information from multi-modality and yield a promising AD diagnosis performance.","PeriodicalId":90705,"journal":{"name":"Progress in Electromagnetics Research Symposium : [proceedings]. 
Progress in Electromagnetics Research Symposium","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MULTIMODAL 2.5D CONVOLUTIONAL NEURAL NETWORK FOR DIAGNOSIS OF ALZHEIMER'S DISEASE WITH MAGNETIC RESONANCE IMAGING AND POSITRON EMISSION TOMOGRAPHY\",\"authors\":\"Xuyang Zhang, Weiming Lin, Min Xiao, Huazhi Ji\",\"doi\":\"10.2528/pier21051102\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Alzheimer’s disease (AD) is a degenerative disease of the nervous system that often occurs in the elderly. As magnetic resonance imaging (MRI) and positron emission tomography (PET) reflect the brain’s anatomical changes and functional changes caused by AD, they are often used to diagnose AD. Multimodal fusion based on these two types of images can effectively utilize complementary information and improve diagnostic performance. To avoid the computational complexity of the 3D image and expand training samples, this study designed an AD diagnosis framework based on a 2.5D convolutional neural network (CNN) to fuse multimodal data. First, MRI and PET were preprocessed with skull stripping and registration. After that, multiple 2.5D patches were extracted within the hippocampus regions from both MRI and PET. Then, we constructed a multimodal 2.5D CNN to integrate the multimodal information fromMRI and PET patches. We also utilized a training strategy called branches pre-training to enhance the feature extraction ability of the 2.5D CNN by pre-training two branches with corresponding modalities individually. Finally, the results of patches are used to diagnose AD and progressive mild cognitive impairment (pMCI) patients from normal controls (NC). The experiments were conducted with the ADNI dataset, and accuracies of 92.89% and 84.07% were achieved in the AD vs. NC and pMCI vs. NC tasks. 
The results are much better than using single modality and indicate that the proposed multimodal 2.5D CNN could effectively integrate complementary information from multi-modality and yield a promising AD diagnosis performance.\",\"PeriodicalId\":90705,\"journal\":{\"name\":\"Progress in Electromagnetics Research Symposium : [proceedings]. Progress in Electromagnetics Research Symposium\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Progress in Electromagnetics Research Symposium : [proceedings]. Progress in Electromagnetics Research Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2528/pier21051102\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Progress in Electromagnetics Research Symposium : [proceedings]. Progress in Electromagnetics Research Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2528/pier21051102","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MULTIMODAL 2.5D CONVOLUTIONAL NEURAL NETWORK FOR DIAGNOSIS OF ALZHEIMER'S DISEASE WITH MAGNETIC RESONANCE IMAGING AND POSITRON EMISSION TOMOGRAPHY
Alzheimer’s disease (AD) is a degenerative disease of the nervous system that mainly affects the elderly. Because magnetic resonance imaging (MRI) and positron emission tomography (PET) reflect the anatomical and functional changes that AD causes in the brain, they are often used to diagnose the disease. Multimodal fusion of these two types of images can exploit their complementary information and improve diagnostic performance. To avoid the computational cost of full 3D images and to enlarge the pool of training samples, this study designed an AD diagnosis framework based on a 2.5D convolutional neural network (CNN) that fuses multimodal data. First, the MRI and PET images were preprocessed with skull stripping and registration. Next, multiple 2.5D patches were extracted within the hippocampal regions of both MRI and PET. We then constructed a multimodal 2.5D CNN to integrate the information from the MRI and PET patches, and adopted a training strategy called branches pre-training, in which each of the two branches is first pre-trained individually on its corresponding modality, to strengthen the feature extraction ability of the 2.5D CNN. Finally, the patch-level results are used to distinguish AD and progressive mild cognitive impairment (pMCI) patients from normal controls (NC). Experiments on the ADNI dataset achieved accuracies of 92.89% and 84.07% in the AD vs. NC and pMCI vs. NC tasks, respectively. These results are substantially better than those obtained with a single modality and indicate that the proposed multimodal 2.5D CNN can effectively integrate complementary information from multiple modalities and yield promising AD diagnosis performance.
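The pipeline the abstract describes — extract three orthogonal slices around a point in each registered volume, run each modality through its own branch, and classify the concatenated features — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: a linear projection with ReLU stands in for each pre-trained CNN branch, and the patch size, volume dimensions, and hippocampus center coordinates are illustrative assumptions.

```python
import numpy as np

def extract_25d_patch(volume, center, size=32):
    """Extract a 2.5D patch: three orthogonal (axial, coronal, sagittal)
    slices of size x size through `center`, stacked as channels."""
    x, y, z = center
    h = size // 2
    axial    = volume[x, y - h:y + h, z - h:z + h]
    coronal  = volume[x - h:x + h, y, z - h:z + h]
    sagittal = volume[x - h:x + h, y - h:y + h, z]
    return np.stack([axial, coronal, sagittal], axis=0)  # (3, size, size)

def branch_features(patch, weights):
    """Stand-in for one pre-trained branch: flatten the patch and project
    it to a feature vector (a real branch would be stacked conv layers)."""
    return np.maximum(patch.ravel() @ weights, 0.0)  # linear layer + ReLU

def fuse_and_classify(mri_patch, pet_patch, w_mri, w_pet, w_out):
    """Concatenate MRI and PET branch features, then apply a linear
    classifier -- the late-fusion pattern the abstract describes."""
    fused = np.concatenate([branch_features(mri_patch, w_mri),
                            branch_features(pet_patch, w_pet)])
    return int(np.argmax(fused @ w_out))  # 0 = NC, 1 = AD

# Toy run on random 64^3 "volumes"; the hippocampus center (32, 32, 32)
# is a hypothetical coordinate in already-registered images.
rng = np.random.default_rng(0)
mri = rng.random((64, 64, 64))
pet = rng.random((64, 64, 64))
mri_p = extract_25d_patch(mri, (32, 32, 32))
pet_p = extract_25d_patch(pet, (32, 32, 32))
w_mri = rng.standard_normal((mri_p.size, 16))
w_pet = rng.standard_normal((pet_p.size, 16))
w_out = rng.standard_normal((32, 2))
label = fuse_and_classify(mri_p, pet_p, w_mri, w_pet, w_out)
```

In the paper's scheme, predictions from multiple such patches would then be combined into a subject-level diagnosis, and the two branches would be pre-trained separately on their own modality before the fused network is trained end to end.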