Multimodal lightweight neural network for Alzheimer's disease diagnosis integrating neuroimaging and cognitive scores

Bhoomi Gupta, Ganesh Kanna Jegannathan, Mohammad Shabbir Alam, Kottala Sri Yogi, Janjhyam Venkata Naga Ramesh, Vemula Jasmine Sowmya, Isa Bayhan
Neuroscience Informatics, Volume 5, Issue 3, Article 100218. Published 2025-07-10. DOI: 10.1016/j.neuri.2025.100218. Available at: https://www.sciencedirect.com/science/article/pii/S2772528625000330
Citations: 0

Abstract

Conventional single-modal approaches for auxiliary diagnosis of Alzheimer's disease (AD) face several limitations, including insufficient availability of expertly annotated imaging datasets, unstable feature extraction, and high computational demands. To address these challenges, we propose Light-Mo-DAD, a lightweight multimodal diagnostic neural network designed to integrate MRI, PET imaging, and neuropsychological assessment scores for enhanced AD detection. In the neuroimaging feature extraction module, redundancy-reduced convolutional operations are employed to capture fine-grained local features, while a global filtering mechanism enables the extraction of holistic spatial patterns. Multimodal feature fusion is achieved through spatial image registration and summation, allowing for effective integration of structural and functional imaging modalities. The neurocognitive feature extraction module utilizes depthwise separable convolutions to process cognitive assessment data, which are then fused with multimodal imaging features. To further enhance the model's discriminative capacity, transfer learning techniques are applied. A multilayer perceptron (MLP) classifier is incorporated to capture complex feature interactions and improve diagnostic precision. Evaluation on the ADNI dataset demonstrates that Light-Mo-DAD achieves 98.0% accuracy, 98.5% sensitivity, and 97.5% specificity, highlighting its robustness in early AD detection. These results suggest that the proposed architecture not only enhances diagnostic accuracy but also offers strong potential for real-time, mobile deployment in clinical settings, supporting neurologists in efficient and reliable Alzheimer's diagnosis.
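The abstract's "lightweight" claim rests largely on depthwise separable convolutions and summation-based fusion of registered feature maps. As an illustration only (the actual Light-Mo-DAD layers are not specified in this abstract, so all names and shapes below are assumptions), a minimal NumPy sketch of a depthwise separable convolution and of summation fusion:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution (valid padding, stride 1).

    x          : (C_in, H, W)  input feature map
    dw_kernels : (C_in, k, k)  one spatial kernel per input channel
    pw_weights : (C_out, C_in) 1x1 pointwise channel-mixing weights
    """
    c_in, h, w = x.shape
    k = dw_kernels.shape[1]
    oh, ow = h - k + 1, w - k + 1
    # Depthwise stage: each channel is filtered independently.
    dw = np.zeros((c_in, oh, ow))
    for c in range(c_in):
        for i in range(oh):
            for j in range(ow):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])
    # Pointwise stage: a 1x1 conv mixes channels at every spatial location.
    return np.tensordot(pw_weights, dw, axes=([1], [0]))  # (C_out, oh, ow)

def fuse_by_summation(feat_a, feat_b):
    """Elementwise summation of two spatially registered feature maps."""
    assert feat_a.shape == feat_b.shape, "maps must share one registered grid"
    return feat_a + feat_b

# Why this is "lightweight": a standard conv needs C_out*C_in*k*k weights,
# while the separable form needs only C_in*k*k + C_out*C_in.
# For C_in=3, C_out=2, k=3: 54 weights vs 33.
x = np.ones((3, 5, 5))
out = depthwise_separable_conv(x, np.ones((3, 3, 3)), np.ones((2, 3)))
fused = fuse_by_summation(out, out)
```

The parameter saving grows with channel count, which is consistent with the abstract's emphasis on mobile, real-time deployment; how the paper's global filtering mechanism is implemented cannot be inferred from the abstract alone.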
Source journal: Neuroscience Informatics
Subject areas: Surgery, Radiology and Imaging, Information Systems, Neurology, Artificial Intelligence, Computer Science Applications, Signal Processing, Critical Care and Intensive Care Medicine, Health Informatics, Clinical Neurology, Pathology and Medical Technology
Self-citation rate: 0.00%
Review time: 57 days