Multi-Modal Deep Learning on Imaging Genetics for Schizophrenia Classification
Ayush Kanyal, S. Kandula, Vince D. Calhoun, Dong Hye Ye
2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 4 June 2023
DOI: 10.1109/ICASSPW59220.2023.10193352
Abstract
Schizophrenia (SZ) is a severe, chronic mental condition that impacts one's capacity to think, act, and interact with others. It has been established that SZ patients exhibit morphological changes in the brain, including decreased hippocampal and thalamic volume. It is also known that patients with SZ have irregular functional brain connectivity. Furthermore, because SZ has a genetic component, genetic markers such as single nucleotide polymorphisms (SNPs) can be useful for characterizing SZ patients. We propose an automatic method to detect changes in SZ patients' brains that accounts for the disorder's heterogeneous multi-modal nature. We present a novel deep-learning method to classify SZ subjects using morphological features from structural MRI (sMRI), brain connectivity features from functional MRI (fMRI), and genetic features from SNPs. For sMRI, we use a pre-trained DenseNet to extract convolutional features that encode the morphological changes induced by SZ. For fMRI, we select the important connections in the functional network connectivity (FNC) matrix by applying layer-wise relevance propagation (LRP). We also detect SZ-linked SNPs by applying LRP to a pre-trained 1-dimensional convolutional neural network. Combined features from these three modalities are then fed to an extreme gradient boosting (XGBoost) tree classifier for SZ diagnosis. Experiments on a clinical dataset show that our multi-modal approach significantly improves SZ classification accuracy compared with uni-modal deep learning methods.
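The LRP step described in the abstract can be illustrated for a single dense layer with the common epsilon rule, which redistributes a layer's output relevance back onto its inputs in proportion to each input's contribution to the pre-activations. This is a minimal sketch under assumed shapes, not the authors' implementation; the function name, toy dimensions, and random weights are all illustrative.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP backward pass through one dense layer.

    a:     (d_in,)  input activations to the layer
    W:     (d_in, d_out) weight matrix
    b:     (d_out,) bias
    R_out: (d_out,) relevance assigned to the layer's outputs
    Returns a (d_in,) vector of relevance redistributed to the inputs.
    """
    z = a @ W + b                        # forward pre-activations
    s = R_out / (z + eps * np.sign(z))   # stabilized relevance ratios
    return a * (W @ s)                   # each input gets its share

# Toy example: 4 input features (e.g. FNC entries), 2 output units.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.standard_normal((4, 2))
R_out = np.array([1.0, 0.5])
R_in = lrp_epsilon_dense(a, W, np.zeros(2), R_out)
# With zero bias and a small eps, total relevance is (approximately)
# conserved: sum(R_in) stays close to sum(R_out).
print(R_in)
```

In the pipeline described above, relevance scores like `R_in` would rank FNC connections (or SNPs, via the 1-D CNN) so that only the most SZ-relevant features are concatenated with the sMRI DenseNet features before the XGBoost classifier.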