Multimodal data fusion for Alzheimer's disease based on dynamic heterogeneous graph convolutional neural network and generative adversarial network

Xiaoyu Chen, Shuaiqun Wang, Wei Kong

Array, Volume 26, Article 100415. Published 2025-05-20. DOI: 10.1016/j.array.2025.100415
https://www.sciencedirect.com/science/article/pii/S2590005625000426
Alzheimer's disease (AD) is a complex neurodegenerative disorder, and understanding its pathogenic mechanisms is crucial for accurate diagnosis. Research has progressed from single-modal analysis to multi-modal data fusion, leveraging deep learning's capacity to handle complex datasets. However, existing deep learning models focus primarily on homogeneous data and face limitations in classification accuracy and interpretability, and the complex, diverse causes of AD make it challenging to fully exploit the complementary information among different data types. To address these challenges, we propose a multi-modal data fusion method based on a Dynamic Heterogeneous Attention Network (DHAN) and Generative Adversarial Networks (GANs). The method designs private graph convolutional layers and shared heterogeneous attention layers, combining dynamic graph structure updates with graph structure regularization to dynamically strengthen inter-modal relationships, and integrates structural Magnetic Resonance Imaging (sMRI), Single Nucleotide Polymorphism (SNP), and gene expression (GENE) data. Additionally, GANs generate synthetic data to augment the training set, improving the model's robustness and generalization ability. Experimental results demonstrate that the proposed DHAN-GAN model achieves outstanding performance on AD classification tasks, with an accuracy (ACC) of 92.31%, exceeding traditional methods by over 10% and significantly outperforming comparative models on precision, recall, and F1 score. This study provides a novel and effective solution for applying multi-modal data fusion to Alzheimer's disease classification.
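The "private graph convolutional layers plus shared attention" idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy graphs, the ReLU activation, the shared query vector `q`, and the softmax-weighted sum used to fuse modality embeddings are all assumptions standing in for the paper's shared heterogeneous attention layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Standard symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def private_gcn_layer(A, X, W):
    # One modality-specific ("private") graph convolution with ReLU.
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def shared_attention_fusion(H_list, q):
    # Score each modality embedding against a shared query vector and
    # combine the embeddings with softmax weights -- an assumed stand-in
    # for the shared heterogeneous attention layer in the abstract.
    scores = np.array([np.mean(H @ q) for H in H_list])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return sum(wi * Hi for wi, Hi in zip(w, H_list)), w

n, d_in, d_out = 6, 4, 3
# Toy graphs and features for three modalities (sMRI, SNP, GENE stand-ins).
H_list = []
for _ in range(3):
    A = (rng.random((n, n)) < 0.4).astype(float)
    A = np.triu(A, 1); A = A + A.T          # undirected, no self-loops
    X = rng.standard_normal((n, d_in))
    W = rng.standard_normal((d_in, d_out))
    H_list.append(private_gcn_layer(A, X, W))

q = rng.standard_normal(d_out)
H_fused, weights = shared_attention_fusion(H_list, q)
print(H_fused.shape, np.round(weights, 3))
```

Each modality keeps its own convolution weights (the "private" part), while the fusion weights are computed jointly across modalities, so one modality can dominate the fused embedding when its representation aligns better with the shared query.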
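The GAN-based augmentation step can likewise be sketched in outline. The sketch below assumes a generator has already been trained adversarially; a random linear map with a `tanh` output stands in for it, and the choice to augment class 1 with 10 synthetic samples is arbitrary. It shows only the augmentation plumbing, not the adversarial training itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_synthetic(n_samples, latent_dim, W_g, b_g):
    # Map latent noise z through a (stand-in) generator to feature space.
    Z = rng.standard_normal((n_samples, latent_dim))
    return np.tanh(Z @ W_g + b_g)

n_real, d_feat, latent_dim = 20, 8, 4
X_real = rng.standard_normal((n_real, d_feat))
y_real = rng.integers(0, 2, size=n_real)

# Stand-in for trained generator weights; the real DHAN-GAN generator
# would be learned against a discriminator on the actual modality data.
W_g = rng.standard_normal((latent_dim, d_feat))
b_g = np.zeros(d_feat)

# Augment one class (label 1 here, chosen arbitrarily) with synthetic rows.
n_synth = 10
X_synth = generate_synthetic(n_synth, latent_dim, W_g, b_g)
X_aug = np.vstack([X_real, X_synth])
y_aug = np.concatenate([y_real, np.ones(n_synth, dtype=int)])
print(X_aug.shape, y_aug.shape)
```

The augmented `(X_aug, y_aug)` pair is what a downstream classifier would be trained on; the synthetic rows enlarge the training set, which is the robustness/generalization benefit the abstract attributes to the GAN component.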