Attention-Based Dual-Knowledge Distillation for Alzheimer’s Disease Stage Detection Using MRI Scans

Impact Factor: 1.5
Chandita Barman;Sudhanshu Singh;Manob Jyoti Saikia;Shovan Barma
{"title":"Attention-Based Dual-Knowledge Distillation for Alzheimer’s Disease Stage Detection Using MRI Scans","authors":"Chandita Barman;Sudhanshu Singh;Manob Jyoti Saikia;Shovan Barma","doi":"10.1109/OJIM.2025.3589698","DOIUrl":null,"url":null,"abstract":"This study presents an efficient attention-guided dual-knowledge distillation (D-KD) framework for classifying Alzheimer’s disease (AD) stages using magnetic resonance imaging (MRI) scans based on detection of the subtle anatomical differences. Current challenges involve identifying precise discriminating features in low computational complexity without compromising classification accuracy. In this work, a dual-teacher model consisting of vision transformer (ViT) and swin transformer (ST) for capturing global and local features, respectively, is utilized to distill comprehensive knowledge into a lightweight ViT-based student model, ensuring accurate classification efficacy with reduced computational demands. For validation of the proposed idea, two well-known benchmark MRI datasets, Alzheimer’s Disease Neuroimaging Initiative (ADNI) and AIBL, have been considered for multiclass classification, using an online-training knowledge distillation approach, where teacher and student networks are trained concurrently. The proposed model has achieved accuracies (Ac) up to 98.24% and 97.07% on ADNI and AIBL, respectively, with a significant performance improvement of 15.6% with respect to existing works. The analysis shows that by leveraging the complementary strengths of ViT and ST, the D-KD strategy enhances generalization in data-limited scenarios and provides a reliable, resource-efficient solution for MRI-based AD diagnosis.","PeriodicalId":100630,"journal":{"name":"IEEE Open Journal of Instrumentation and Measurement","volume":"4 ","pages":"1-10"},"PeriodicalIF":1.5000,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11082332","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Instrumentation and Measurement","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11082332/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

This study presents an efficient attention-guided dual-knowledge distillation (D-KD) framework for classifying Alzheimer’s disease (AD) stages from magnetic resonance imaging (MRI) scans based on detection of subtle anatomical differences. The central challenge is identifying precise discriminating features at low computational complexity without compromising classification accuracy. In this work, a dual-teacher model consisting of a vision transformer (ViT) and a swin transformer (ST), which capture global and local features respectively, is used to distill comprehensive knowledge into a lightweight ViT-based student model, ensuring accurate classification with reduced computational demands. To validate the proposed idea, two well-known benchmark MRI datasets, the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and AIBL, have been considered for multiclass classification using an online-training knowledge distillation approach in which the teacher and student networks are trained concurrently. The proposed model achieves accuracies (Ac) of up to 98.24% and 97.07% on ADNI and AIBL, respectively, a significant performance improvement of 15.6% over existing works. The analysis shows that, by leveraging the complementary strengths of ViT and ST, the D-KD strategy enhances generalization in data-limited scenarios and provides a reliable, resource-efficient solution for MRI-based AD diagnosis.
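The abstract describes the dual-teacher setup and the online (concurrent) training scheme only at a high level. The sketch below is a minimal, hedged illustration of how such a dual-teacher distillation objective is commonly assembled: it is not the authors' implementation. The tiny MLP backbones stand in for the ViT teacher, the Swin teacher, and the lightweight ViT student, and the loss weights `alpha`, `beta` and the temperature `T` are assumed values for demonstration only.

```python
# Hedged sketch of a dual-teacher (online) knowledge-distillation objective.
# Placeholder MLPs stand in for the ViT teacher, Swin teacher, and ViT student;
# alpha, beta, and T are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # number of AD stages depends on the dataset split used


def make_net(width):
    # Placeholder backbone; the paper uses ViT/Swin transformers instead.
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, width),
                         nn.ReLU(), nn.Linear(width, NUM_CLASSES))


teacher_vit, teacher_swin, student = make_net(256), make_net(256), make_net(64)


def dual_kd_loss(student_logits, t1_logits, t2_logits, labels,
                 alpha=0.5, beta=0.5, T=4.0):
    """Cross-entropy on labels plus KL distillation from both teachers."""
    ce = F.cross_entropy(student_logits, labels)
    kd1 = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                   F.softmax(t1_logits.detach() / T, dim=1),
                   reduction="batchmean") * T * T
    kd2 = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                   F.softmax(t2_logits.detach() / T, dim=1),
                   reduction="batchmean") * T * T
    return ce + alpha * kd1 + beta * kd2


# Online KD: teachers and student are optimized in the same training step,
# matching the abstract's description of concurrent teacher-student training.
params = (list(teacher_vit.parameters()) + list(teacher_swin.parameters())
          + list(student.parameters()))
opt = torch.optim.AdamW(params, lr=1e-4)

x = torch.randn(8, 1, 64, 64)               # dummy batch of single-channel MRI slices
y = torch.randint(0, NUM_CLASSES, (8,))

t1, t2, s = teacher_vit(x), teacher_swin(x), student(x)
loss = (F.cross_entropy(t1, y) + F.cross_entropy(t2, y)  # teachers learn from labels
        + dual_kd_loss(s, t1, t2, y))                    # student learns from labels + teachers
loss.backward()
opt.step()
```

Detaching the teacher logits inside the distillation term keeps the teachers supervised only by the label loss while the student absorbs both label and teacher signals, which is one common way to realize the concurrent training the abstract mentions.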