CNNs based multi-modality classification for AD diagnosis
D. Cheng, Manhua Liu
2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1-5, October 2017
DOI: 10.1109/CISP-BMEI.2017.8302281
Citations: 57
Abstract
Accurate and early diagnosis of Alzheimer's disease (AD) plays a significant role in patient care and the development of future treatments. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) neuroimages are effective modalities that can help physicians diagnose AD. In the past few years, machine-learning algorithms have been widely studied for the analysis of multi-modality neuroimages in the quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD from other groups. This paper proposes to construct multi-level convolutional neural networks (CNNs) that gradually learn and combine multi-modality features for AD classification using MRI and PET images. First, deep 3D CNNs are constructed to transform the whole-brain information into compact high-level features for each modality. Then, a 2D CNN is cascaded to fuse the high-level features for image classification. The proposed method can automatically learn generic features from MRI and PET imaging data for AD classification; no rigid image registration or segmentation is performed on the brain images. The method is evaluated on baseline MRI and PET images of 193 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including 93 Alzheimer's disease (AD) subjects and 100 normal control (NC) subjects. Experimental results and comparisons show that the proposed method achieves an accuracy of 89.64% for classification of AD vs. NC, demonstrating promising classification performance.
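The multi-level data flow described above (a deep 3D CNN per modality producing a compact feature vector, with the two modality features stacked as a small 2D map for the cascaded 2D CNN) can be sketched as follows. This is a minimal NumPy illustration of the data flow only: `FEAT_DIM`, the toy 32x32x32 volumes, and the random-projection stand-in for the 3D CNNs are assumptions for illustration, not the paper's actual network or dimensions.

```python
import numpy as np

# Hypothetical feature dimension (not from the paper): each deep 3D CNN
# maps a whole-brain volume to a compact high-level feature vector.
FEAT_DIM = 64

def extract_3d_features(volume, seed):
    # Stand-in for a per-modality deep 3D CNN: a fixed random projection of
    # the flattened volume followed by a ReLU-like nonlinearity. This only
    # illustrates the volume -> compact-feature step, not the real network.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((volume.size, FEAT_DIM)) / np.sqrt(volume.size)
    return np.maximum(volume.reshape(-1) @ w, 0.0)

def fuse_modalities(mri_feat, pet_feat):
    # The cascaded 2D CNN ingests the two modality feature vectors stacked
    # as a 2 x FEAT_DIM "image"; here we only construct that fused input.
    return np.stack([mri_feat, pet_feat], axis=0)

# Toy whole-brain volumes standing in for preprocessed MRI and PET scans.
mri = np.random.default_rng(0).standard_normal((32, 32, 32))
pet = np.random.default_rng(1).standard_normal((32, 32, 32))

fused = fuse_modalities(extract_3d_features(mri, seed=2),
                        extract_3d_features(pet, seed=3))
print(fused.shape)  # 2 x FEAT_DIM input for the cascaded 2D CNN classifier
```

The point of the stacking step is that cross-modality combination is deferred to the learned 2D CNN rather than done by hand-crafted feature fusion, matching the paper's motivation of avoiding rigid registration and manual feature engineering.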