Multi-modal orthogonal fusion network via cross-layer guidance for Alzheimer's disease diagnosis

IF 6.3 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Yumiao Zhao, Bo Jiang, Yuan Chen, Ye Luo, Jin Tang
{"title":"跨层引导的多模态正交融合网络用于阿尔茨海默病诊断","authors":"Yumiao Zhao ,&nbsp;Bo Jiang ,&nbsp;Yuan Chen ,&nbsp;Ye Luo ,&nbsp;Jin Tang","doi":"10.1016/j.neunet.2025.108091","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-modal neuroimaging techniques are widely employed for the accurate diagnosis of Alzheimer’s Disease (AD). Existing fusion methods typically focus on capturing semantic correlations between modalities through feature-level interactions. However, they fail to suppress redundant cross-modal information, resulting in sub-optimal multi-modal representation. Moreover, these methods ignore subject-specific differences in modality contributions. To address these challenges, we propose a novel Multi-modal Orthogonal Fusion Network via cross-layer guidance (MOFNet) to effectively fuse multi-modal information for AD diagnosis. We first design a Cross-layer Guidance Interaction module (CGI), leveraging high-level features to guide the learning of low-level features, thereby enhancing the fine-grained representations on disease-relevant regions. Then, we introduce a Multi-modal Orthogonal Compensation module (MOC) to realize bidirectional interaction between modalities. MOC encourages each modality to compensate for its limitations by learning orthogonal components from other modalities. Finally, a Feature Enhancement Fusion module (FEF) is developed to adaptively fuse multi-modal features based on the contributions of different modalities. Extensive experiments on the ADNI dataset demonstrate that MOFNet achieves superior performance in AD classification tasks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108091"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-modal orthogonal fusion network via cross-layer guidance for Alzheimer’s disease diagnosis\",\"authors\":\"Yumiao Zhao ,&nbsp;Bo Jiang ,&nbsp;Yuan Chen ,&nbsp;Ye Luo ,&nbsp;Jin Tang\",\"doi\":\"10.1016/j.neunet.2025.108091\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-modal neuroimaging techniques are widely employed for the accurate diagnosis of Alzheimer’s Disease (AD). Existing fusion methods typically focus on capturing semantic correlations between modalities through feature-level interactions. However, they fail to suppress redundant cross-modal information, resulting in sub-optimal multi-modal representation. Moreover, these methods ignore subject-specific differences in modality contributions. To address these challenges, we propose a novel Multi-modal Orthogonal Fusion Network via cross-layer guidance (MOFNet) to effectively fuse multi-modal information for AD diagnosis. We first design a Cross-layer Guidance Interaction module (CGI), leveraging high-level features to guide the learning of low-level features, thereby enhancing the fine-grained representations on disease-relevant regions. Then, we introduce a Multi-modal Orthogonal Compensation module (MOC) to realize bidirectional interaction between modalities. MOC encourages each modality to compensate for its limitations by learning orthogonal components from other modalities. Finally, a Feature Enhancement Fusion module (FEF) is developed to adaptively fuse multi-modal features based on the contributions of different modalities. 
Extensive experiments on the ADNI dataset demonstrate that MOFNet achieves superior performance in AD classification tasks.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"194 \",\"pages\":\"Article 108091\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2025-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025009712\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025009712","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multi-modal neuroimaging techniques are widely employed for the accurate diagnosis of Alzheimer’s Disease (AD). Existing fusion methods typically focus on capturing semantic correlations between modalities through feature-level interactions. However, they fail to suppress redundant cross-modal information, resulting in sub-optimal multi-modal representation. Moreover, these methods ignore subject-specific differences in modality contributions. To address these challenges, we propose a novel Multi-modal Orthogonal Fusion Network via cross-layer guidance (MOFNet) to effectively fuse multi-modal information for AD diagnosis. We first design a Cross-layer Guidance Interaction module (CGI), leveraging high-level features to guide the learning of low-level features, thereby enhancing the fine-grained representations on disease-relevant regions. Then, we introduce a Multi-modal Orthogonal Compensation module (MOC) to realize bidirectional interaction between modalities. MOC encourages each modality to compensate for its limitations by learning orthogonal components from other modalities. Finally, a Feature Enhancement Fusion module (FEF) is developed to adaptively fuse multi-modal features based on the contributions of different modalities. Extensive experiments on the ADNI dataset demonstrate that MOFNet achieves superior performance in AD classification tasks.
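Two of the abstract's mechanisms have natural mathematical readings: the MOC module's "orthogonal components" suggest a Gram-Schmidt-style projection of one modality's features onto the subspace orthogonal to another's, and the FEF module's contribution-based fusion suggests a learned gating over modalities. The PyTorch sketch below is a minimal illustration under those assumptions only; the function names, feature dimensions, and gating form are ours, and the actual MOFNet formulation may differ.

```python
import torch
import torch.nn as nn

def orthogonal_component(x, y, eps=1e-8):
    """Return the component of y orthogonal to x (per-sample projection).

    x, y: (batch, dim) feature vectors from two modalities.
    Classic Gram-Schmidt form; that MOC uses exactly this is an assumption.
    """
    # projection coefficient <y, x> / <x, x>, computed per sample
    coeff = (y * x).sum(dim=-1, keepdim=True) / \
            (x * x).sum(dim=-1, keepdim=True).clamp_min(eps)
    return y - coeff * x

class GatedFusion(nn.Module):
    """Contribution-weighted fusion of two modality features
    (a hypothetical analogue of the FEF module)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2)  # one scalar weight per modality

    def forward(self, f_a, f_b):
        # softmax weights reflect each modality's learned contribution
        w = torch.softmax(self.gate(torch.cat([f_a, f_b], dim=-1)), dim=-1)
        return w[:, 0:1] * f_a + w[:, 1:2] * f_b

# Toy usage with random stand-ins for MRI and PET features.
mri = torch.randn(4, 128)
pet = torch.randn(4, 128)
# Each modality is compensated with the other's orthogonal (non-redundant)
# component, giving the bidirectional interaction the abstract describes.
mri_comp = mri + orthogonal_component(mri, pet)
pet_comp = pet + orthogonal_component(pet, mri)
fused = GatedFusion(128)(mri_comp, pet_comp)
print(fused.shape)  # torch.Size([4, 128])
```

Adding only the orthogonal component, rather than the raw cross-modal feature, is what suppresses redundant information: the part of PET already expressible by MRI contributes nothing to the MRI branch, and vice versa.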
Source journal: Neural Networks (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.