Deep association analysis framework with multi-modal attention fusion for brain imaging genetics
Shuang-Qing Wang, Cui-Na Jiao, Ying-Lian Gao, Xin-Chun Cui, Yan-Li Wang, Jin-Xing Liu
Medical Image Analysis, Volume 107, Article 103827 (published 2025-09-29). DOI: 10.1016/j.media.2025.103827
Citations: 0
Abstract
Brain imaging genetics is a crucial technique that integrates the analysis of genetic variation and imaging quantitative traits to provide new insights into the genetic mechanisms and phenotypic characteristics of the brain. With the advancement of medical imaging technology, correlation analysis between multi-modal imaging and genetic data has gradually gained widespread attention. However, existing methods usually employ simple concatenation to combine multi-modal imaging features, overlooking the interaction and complementary information between modalities. Moreover, traditional correlation analysis is typically used for the joint study of phenotypes and genotypes, resulting in an incomplete exploration of the complex intrinsic associations between them. Therefore, in this paper, a deep association analysis framework with multi-modal attention fusion (DAAMAF) is proposed for the early diagnosis of Alzheimer’s disease (AD). First, multi-modal feature representations are extracted from the imaging genetics data to achieve nonlinear mapping and obtain enriched information. Then, a cross-modal attention network is designed to learn the interactions between multi-modal imaging features and better exploit their complementary roles in disease diagnosis. Genetic information is mapped onto the imaging representation through a generative network to capture the complicated intrinsic associations between neuroimaging and genetics. Finally, a diagnostic module is utilized for performance analysis and the detection of disease-related biomarkers. Experiments on the AD Neuroimaging Initiative dataset demonstrate that DAAMAF achieves superior performance and discovers biomarkers associated with AD, promising to make a significant contribution to understanding the pathogenesis of the disease. The code is publicly available at https://github.com/Yeah123456ye/DAAMAF.
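To make the cross-modal attention step more concrete, the sketch below shows one common way to let two imaging modalities attend to each other and produce a fused representation. This is only an illustrative example, not the authors' released DAAMAF implementation (available at the GitHub link above); the module name, feature dimensions, and pooling choice are all hypothetical assumptions.

```python
# Illustrative sketch only: bidirectional cross-modal attention fusion for two
# imaging modalities (e.g., MRI- and PET-derived region-level features).
# All names and dimensions are assumptions; this is not the DAAMAF codebase.
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Each direction lets one modality query the other, so complementary
        # information can flow both ways instead of being simply concatenated.
        self.attn_a_to_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_b_to_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, num_regions, dim) region-level features.
        a_att, _ = self.attn_a_to_b(feat_a, feat_b, feat_b)  # A attends to B
        b_att, _ = self.attn_b_to_a(feat_b, feat_a, feat_a)  # B attends to A
        a_out = self.norm_a(feat_a + a_att)  # residual connection + normalization
        b_out = self.norm_b(feat_b + b_att)
        # Concatenate the attended streams and project to a fused embedding.
        fused = self.fuse(torch.cat([a_out, b_out], dim=-1))
        return fused.mean(dim=1)  # pool over regions -> (batch, dim)


if __name__ == "__main__":
    mri = torch.randn(8, 90, 128)  # e.g., 90 brain regions, 128-d features
    pet = torch.randn(8, 90, 128)
    fused = CrossModalAttentionFusion()(mri, pet)
    print(fused.shape)  # torch.Size([8, 128])
```

The fused embedding could then feed a downstream diagnostic classifier; in the paper's framework, the genetic branch is additionally mapped onto the imaging representation via a generative network before diagnosis.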
About the journal:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.