Bhoomi Gupta, Ganesh Kanna Jegannathan, Mohammad Shabbir Alam, Kottala Sri Yogi, Janjhyam Venkata Naga Ramesh, Vemula Jasmine Sowmya, Isa Bayhan
{"title":"综合神经影像学和认知评分的阿尔茨海默病多模态轻量级神经网络诊断","authors":"Bhoomi Gupta , Ganesh Kanna Jegannathan , Mohammad Shabbir Alam , Kottala Sri Yogi , Janjhyam Venkata Naga Ramesh , Vemula Jasmine Sowmya , Isa Bayhan","doi":"10.1016/j.neuri.2025.100218","DOIUrl":null,"url":null,"abstract":"<div><div>Conventional single-modal approaches for auxiliary diagnosis of Alzheimer's disease (AD) face several limitations, including insufficient availability of expertly annotated imaging datasets, unstable feature extraction, and high computational demands. To address these challenges, we propose Light-Mo-DAD, a lightweight multimodal diagnostic neural network designed to integrate MRI, PET imaging, and neuropsychological assessment scores for enhanced AD detection. In the neuroimaging feature extraction module, redundancy-reduced convolutional operations are employed to capture fine-grained local features, while a global filtering mechanism enables the extraction of holistic spatial patterns. Multimodal feature fusion is achieved through spatial image registration and summation, allowing for effective integration of structural and functional imaging modalities. The neurocognitive feature extraction module utilizes depthwise separable convolutions to process cognitive assessment data, which are then fused with multimodal imaging features. To further enhance the model's discriminative capacity, transfer learning techniques are applied. A multilayer perceptron (MLP) classifier is incorporated to capture complex feature interactions and improve diagnostic precision. Evaluation on the ADNI dataset demonstrates that Light-Mo-DAD achieves 98.0% accuracy, 98.5% sensitivity, and 97.5% specificity, highlighting its robustness in early AD detection. These results suggest that the proposed architecture not only enhances diagnostic accuracy but also offers strong potential for real-time, mobile deployment in clinical settings, supporting neurologists in efficient and reliable Alzheimer's diagnosis.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 3","pages":"Article 100218"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal lightweight neural network for Alzheimer's disease diagnosis integrating neuroimaging and cognitive scores\",\"authors\":\"Bhoomi Gupta , Ganesh Kanna Jegannathan , Mohammad Shabbir Alam , Kottala Sri Yogi , Janjhyam Venkata Naga Ramesh , Vemula Jasmine Sowmya , Isa Bayhan\",\"doi\":\"10.1016/j.neuri.2025.100218\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Conventional single-modal approaches for auxiliary diagnosis of Alzheimer's disease (AD) face several limitations, including insufficient availability of expertly annotated imaging datasets, unstable feature extraction, and high computational demands. To address these challenges, we propose Light-Mo-DAD, a lightweight multimodal diagnostic neural network designed to integrate MRI, PET imaging, and neuropsychological assessment scores for enhanced AD detection. In the neuroimaging feature extraction module, redundancy-reduced convolutional operations are employed to capture fine-grained local features, while a global filtering mechanism enables the extraction of holistic spatial patterns. 
Multimodal feature fusion is achieved through spatial image registration and summation, allowing for effective integration of structural and functional imaging modalities. The neurocognitive feature extraction module utilizes depthwise separable convolutions to process cognitive assessment data, which are then fused with multimodal imaging features. To further enhance the model's discriminative capacity, transfer learning techniques are applied. A multilayer perceptron (MLP) classifier is incorporated to capture complex feature interactions and improve diagnostic precision. Evaluation on the ADNI dataset demonstrates that Light-Mo-DAD achieves 98.0% accuracy, 98.5% sensitivity, and 97.5% specificity, highlighting its robustness in early AD detection. These results suggest that the proposed architecture not only enhances diagnostic accuracy but also offers strong potential for real-time, mobile deployment in clinical settings, supporting neurologists in efficient and reliable Alzheimer's diagnosis.</div></div>\",\"PeriodicalId\":74295,\"journal\":{\"name\":\"Neuroscience informatics\",\"volume\":\"5 3\",\"pages\":\"Article 100218\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuroscience informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772528625000330\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience informatics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772528625000330","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multimodal lightweight neural network for Alzheimer's disease diagnosis integrating neuroimaging and cognitive scores
Conventional single-modal approaches for auxiliary diagnosis of Alzheimer's disease (AD) face several limitations, including insufficient availability of expertly annotated imaging datasets, unstable feature extraction, and high computational demands. To address these challenges, we propose Light-Mo-DAD, a lightweight multimodal diagnostic neural network designed to integrate MRI, PET imaging, and neuropsychological assessment scores for enhanced AD detection. In the neuroimaging feature extraction module, redundancy-reduced convolutional operations are employed to capture fine-grained local features, while a global filtering mechanism enables the extraction of holistic spatial patterns. Multimodal feature fusion is achieved through spatial image registration and summation, allowing for effective integration of structural and functional imaging modalities. The neurocognitive feature extraction module utilizes depthwise separable convolutions to process cognitive assessment data, which are then fused with multimodal imaging features. To further enhance the model's discriminative capacity, transfer learning techniques are applied. A multilayer perceptron (MLP) classifier is incorporated to capture complex feature interactions and improve diagnostic precision. Evaluation on the ADNI dataset demonstrates that Light-Mo-DAD achieves 98.0% accuracy, 98.5% sensitivity, and 97.5% specificity, highlighting its robustness in early AD detection. These results suggest that the proposed architecture not only enhances diagnostic accuracy but also offers strong potential for real-time, mobile deployment in clinical settings, supporting neurologists in efficient and reliable Alzheimer's diagnosis.
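To make the pipeline described above concrete, the following is a minimal PyTorch sketch of a lightweight multimodal classifier in the same spirit: two convolutional branches for pre-registered MRI and PET inputs fused by summation, a small encoder for cognitive assessment scores, and an MLP classification head. All class names (e.g. LightMultimodalAD, ImagingBranch), layer sizes, input shapes, and the substitution of depthwise separable convolutions plus global average pooling for the paper's redundancy-reduced convolutions and global filtering mechanism are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; sizes, shapes, and fusion details are assumptions,
# not taken from the Light-Mo-DAD paper.
import torch
import torch.nn as nn


class DepthwiseSeparableConv2d(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


class ImagingBranch(nn.Module):
    """Lightweight convolutional branch for one (pre-registered) imaging modality."""

    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableConv2d(1, 16), nn.ReLU(), nn.MaxPool2d(2),
            DepthwiseSeparableConv2d(16, 32), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling as a crude stand-in for global filtering
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class LightMultimodalAD(nn.Module):
    """Fuses MRI and PET branch features by summation (assuming spatially
    registered inputs), concatenates encoded cognitive scores, and classifies
    with a small MLP."""

    def __init__(self, n_scores: int = 6, n_classes: int = 2):
        super().__init__()
        self.mri_branch = ImagingBranch()
        self.pet_branch = ImagingBranch()
        self.score_encoder = nn.Sequential(nn.Linear(n_scores, 16), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(64 + 16, 32), nn.ReLU(), nn.Linear(32, n_classes)
        )

    def forward(self, mri, pet, scores):
        fused_imaging = self.mri_branch(mri) + self.pet_branch(pet)  # summation fusion
        cognitive = self.score_encoder(scores)
        return self.classifier(torch.cat([fused_imaging, cognitive], dim=1))


if __name__ == "__main__":
    model = LightMultimodalAD()
    mri = torch.randn(2, 1, 96, 96)       # batch of 2 single-channel MRI slices
    pet = torch.randn(2, 1, 96, 96)       # matching, registered PET slices
    scores = torch.randn(2, 6)            # e.g. MMSE/ADAS-style assessment scores
    print(model(mri, pet, scores).shape)  # -> torch.Size([2, 2])
```

Summation fusion assumes the two imaging branches emit feature vectors of equal dimension from spatially registered inputs; concatenation would be a drop-in alternative if the modalities were not aligned.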
Neuroscience informatics subject areas: Surgery, Radiology and Imaging, Information Systems, Neurology, Artificial Intelligence, Computer Science Applications, Signal Processing, Critical Care and Intensive Care Medicine, Health Informatics, Clinical Neurology, Pathology and Medical Technology