CrossViT Wide Residual Squeeze-and-Excitation Network for Alzheimer's disease classification with self attention ProGAN data augmentation

Rahma Kadri, Bassem Bouaziz, M. Tmar, F. Gargouri
{"title":"基于自关注ProGAN数据增强的跨svit宽剩余挤压和激励网络用于阿尔茨海默病分类","authors":"Rahma Kadri, Bassem Bouaziz, M. Tmar, F. Gargouri","doi":"10.3233/his-220002","DOIUrl":null,"url":null,"abstract":"Efficient and accurate early prediction of Alzheimer's disease (AD) based on the neuroimaging data has attracted interest from many researchers to prevent its progression. Deep learning networks have demonstrated an optimal ability to analyse large-scale multimodal neuroimaging for AD classification. The most widely used architecture of deep learning is the Convolution neural networks (CNN) that have shown great potential in AD detection. However CNN does not capture long range dependencies within the input image and does not ensure a good global feature extraction. Furthermore, increasing the receptive field of CNN by increasing the kernels sizes can cause a feature granularity loss. Another limitation is that CNN lacks a weighing mechanism of image features; the network doesn’t focus on the relevant features within the image. Recently,vision transformer have shown an outstanding performance over the CNN and overcomes its main limitations. The vision transformer relies on the self-attention layers. The main drawbacks of this new technique is that it requires a huge amount of training data. In this paper, we combined the main strengths of these two architectures for AD classification. We proposed a new method based on the combination of the Cross ViT and Wide Residual Squeeze-and-Excitation Network. We acquired MRI data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS). We also proposed a new data augmentation based on the self attention progressive generative adversarial neural network to overcome the limitation of the data. Our proposed method achieved 99% classification accuracy and outperforms CNN models.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"24 1","pages":"163-177"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CrossViT Wide Residual Squeeze-and-Excitation Network for Alzheimer's disease classification with self attention ProGAN data augmentation\",\"authors\":\"Rahma Kadri, Bassem Bouaziz, M. Tmar, F. Gargouri\",\"doi\":\"10.3233/his-220002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Efficient and accurate early prediction of Alzheimer's disease (AD) based on the neuroimaging data has attracted interest from many researchers to prevent its progression. Deep learning networks have demonstrated an optimal ability to analyse large-scale multimodal neuroimaging for AD classification. The most widely used architecture of deep learning is the Convolution neural networks (CNN) that have shown great potential in AD detection. However CNN does not capture long range dependencies within the input image and does not ensure a good global feature extraction. Furthermore, increasing the receptive field of CNN by increasing the kernels sizes can cause a feature granularity loss. Another limitation is that CNN lacks a weighing mechanism of image features; the network doesn’t focus on the relevant features within the image. Recently,vision transformer have shown an outstanding performance over the CNN and overcomes its main limitations. The vision transformer relies on the self-attention layers. 
The main drawbacks of this new technique is that it requires a huge amount of training data. In this paper, we combined the main strengths of these two architectures for AD classification. We proposed a new method based on the combination of the Cross ViT and Wide Residual Squeeze-and-Excitation Network. We acquired MRI data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS). We also proposed a new data augmentation based on the self attention progressive generative adversarial neural network to overcome the limitation of the data. Our proposed method achieved 99% classification accuracy and outperforms CNN models.\",\"PeriodicalId\":88526,\"journal\":{\"name\":\"International journal of hybrid intelligent systems\",\"volume\":\"24 1\",\"pages\":\"163-177\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International journal of hybrid intelligent systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3233/his-220002\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of hybrid intelligent systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/his-220002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Efficient and accurate early prediction of Alzheimer's disease (AD) from neuroimaging data has attracted the interest of many researchers seeking to prevent its progression. Deep learning networks have demonstrated a strong ability to analyse large-scale multimodal neuroimaging for AD classification. The most widely used deep learning architecture is the convolutional neural network (CNN), which has shown great potential in AD detection. However, CNNs do not capture long-range dependencies within the input image and do not guarantee good global feature extraction. Furthermore, enlarging a CNN's receptive field by increasing kernel sizes can cause a loss of feature granularity. Another limitation is that CNNs lack a weighting mechanism over image features: the network does not focus on the relevant features within the image. Recently, vision transformers have shown outstanding performance over CNNs and overcome these main limitations. The vision transformer relies on self-attention layers. Its main drawback is that it requires a huge amount of training data. In this paper, we combine the main strengths of these two architectures for AD classification. We propose a new method based on the combination of CrossViT and a Wide Residual Squeeze-and-Excitation Network. We acquired MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS). We also propose a new data augmentation scheme based on a self-attention progressive generative adversarial network (ProGAN) to overcome the limited data. Our proposed method achieves 99% classification accuracy and outperforms CNN models.
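
To make the two mechanisms contrasted in the abstract concrete, the sketch below gives a minimal PyTorch illustration based on the standard published designs, not the authors' released code; all layer sizes and tensor shapes are placeholder values. It shows a squeeze-and-excitation block supplying the per-channel feature weighting that plain CNNs lack, followed by off-the-shelf multi-head self-attention over flattened feature-map positions, the operation vision transformers use to capture long-range dependencies.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average-pool each channel ("squeeze"),
    then learn per-channel gates ("excite") that rescale the feature maps,
    giving the network the feature-weighting mechanism plain CNNs lack."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # (B, C, H, W) -> (B, C, 1, 1)
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # per-channel weight in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # reweighted, shape unchanged


# Toy tensor standing in for CNN feature maps of an MRI slice (sizes arbitrary).
feats = torch.randn(2, 64, 14, 14)
print(SEBlock(64)(feats).shape)                         # torch.Size([2, 64, 14, 14])

# Self-attention over flattened spatial positions: every location attends to
# every other one, capturing the long-range dependencies convolutions miss.
tokens = feats.flatten(2).transpose(1, 2)               # (2, 196, 64) token sequence
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
out, _ = attn(tokens, tokens, tokens)
print(out.shape)                                        # torch.Size([2, 196, 64])
```

In the proposed architecture these primitives presumably appear at larger scale, with SE blocks inside the wide residual branch and self-attention inside CrossViT and the ProGAN augmenter; the sketch only demonstrates the generic building blocks.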