A lightweight convolution and vision transformer integrated model with multi-scale self-attention mechanism

IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yi Zhang, Lingxiao Wei, Bowei Zhang, Ziwei Liu, Kai Yi, Shu Hu
{"title":"A lightweight convolution and vision transformer integrated model with multi-scale self-attention mechanism","authors":"Yi Zhang ,&nbsp;Lingxiao Wei ,&nbsp;Bowei Zhang ,&nbsp;Ziwei Liu ,&nbsp;Kai Yi ,&nbsp;Shu Hu","doi":"10.1016/j.neucom.2025.131670","DOIUrl":null,"url":null,"abstract":"<div><div>Vision Transformer (ViT) has prevailed in computer vision tasks due to its strong long-range dependency modelling ability. However, its large model size and weak local feature modeling ability hinder its application in real scenarios. To balance computational efficiency and performance in downstream vision tasks, we propose an efficient ViT model with sparse attention (dubbed SAEViT) and convolution blocks. Specifically, a Sparsely Aggregated Attention (SAA) module has been proposed to perform adaptive sparse sampling and recover the feature map via deconvolution operation, which significantly reduces the computational complexity of attention operations. In addition, a Channel-Interactive Feed-Forward Network (CIFFN) layer is developed to enhance inter-channel information exchange through feature decomposition and redistribution, which mitigates the redundancy in traditional feed-forward networks (FFN). Finally, a hierarchical pyramid structure with embedded depth-wise separable convolutional blocks (DWSConv) is devised to further strengthen convolutional features. Extensive experiments on mainstream datasets show that SAEViT achieves Top-1 accuracies of 76.3 % and 79.6 % on the ImageNet-1 K classification task with only 0.8 GFLOPs and 1.3 GFLOPs, respectively, demonstrating a lightweight solution for fundamental vision tasks.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"658 ","pages":"Article 131670"},"PeriodicalIF":6.5000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225023422","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Vision Transformer (ViT) has prevailed in computer vision tasks due to its strong long-range dependency modeling ability. However, its large model size and weak local feature modeling ability hinder its application in real-world scenarios. To balance computational efficiency and performance in downstream vision tasks, we propose an efficient ViT model with sparse attention (dubbed SAEViT) and convolution blocks. Specifically, a Sparsely Aggregated Attention (SAA) module is proposed to perform adaptive sparse sampling and recover the feature map via a deconvolution operation, which significantly reduces the computational complexity of attention operations. In addition, a Channel-Interactive Feed-Forward Network (CIFFN) layer is developed to enhance inter-channel information exchange through feature decomposition and redistribution, which mitigates the redundancy of traditional feed-forward networks (FFNs). Finally, a hierarchical pyramid structure with embedded depth-wise separable convolution (DWSConv) blocks is devised to further strengthen convolutional features. Extensive experiments on mainstream datasets show that SAEViT achieves Top-1 accuracies of 76.3% and 79.6% on the ImageNet-1K classification task with only 0.8 GFLOPs and 1.3 GFLOPs, respectively, demonstrating a lightweight solution for fundamental vision tasks.
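The abstract describes SAA only at a high level, so the following is a minimal PyTorch sketch of the general idea: attend over a sparsely sampled token grid, then recover the full-resolution feature map by deconvolution. The strided-convolution sampler, stride, head count, and residual connection are illustrative assumptions, not the paper's confirmed design.

```python
import torch
import torch.nn as nn

class SparselyAggregatedAttention(nn.Module):
    """Hypothetical sketch of the SAA idea: sparse sampling before
    attention, deconvolution-based recovery afterwards."""

    def __init__(self, dim, num_heads=4, sample_stride=2):
        super().__init__()
        # A strided conv stands in for "adaptive sparse sampling":
        # it shrinks the token grid by sample_stride per direction.
        self.sample = nn.Conv2d(dim, dim, kernel_size=sample_stride,
                                stride=sample_stride)
        # dim must be divisible by num_heads.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # A transposed conv ("deconvolution") restores the resolution.
        self.recover = nn.ConvTranspose2d(dim, dim, kernel_size=sample_stride,
                                          stride=sample_stride)

    def forward(self, x):  # x: (B, C, H, W), H and W divisible by the stride
        b, c, _, _ = x.shape
        s = self.sample(x)                      # (B, C, H/s, W/s)
        tokens = s.flatten(2).transpose(1, 2)   # (B, N', C) with N' = HW/s^2
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.transpose(1, 2).reshape(b, c, s.shape[2], s.shape[3])
        return x + self.recover(out)            # residual connection (assumed)
```

With stride 2, attention runs on a quarter of the tokens, so the quadratic attention cost drops by roughly 16x, which is consistent with the complexity reduction the abstract claims.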
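Likewise, the CIFFN layer is described only as "feature decomposition and redistribution" across channels. A minimal sketch of one plausible reading follows, using a grouped split plus channel shuffle inside a standard FFN; the group count, shuffle mechanism, and expansion ratio are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ChannelInteractiveFFN(nn.Module):
    """Hypothetical sketch of a channel-interactive FFN: decompose the
    hidden features into groups and interleave them so every group
    exchanges information with the others before the output projection."""

    def __init__(self, dim, expansion=4, groups=4):
        super().__init__()
        hidden = dim * expansion  # hidden must be divisible by groups
        self.groups = groups
        self.fc1 = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):  # x: (B, N, C)
        h = self.act(self.fc1(x))               # (B, N, hidden)
        b, n, c = h.shape
        # "Decomposition and redistribution": split channels into groups,
        # then interleave them (a channel shuffle) to mix across groups.
        h = h.reshape(b, n, self.groups, c // self.groups)
        h = h.transpose(2, 3).reshape(b, n, c)
        return self.fc2(h)
```

The shuffle adds cross-group channel interaction at negligible cost, one way to reduce the channel redundancy the abstract attributes to plain FFNs.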
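The DWSConv blocks embedded in the hierarchical pyramid follow the standard depth-wise separable pattern, a depth-wise 3x3 convolution for spatial mixing followed by a point-wise 1x1 convolution for channel mixing. The sketch below shows that standard block; the normalization and activation choices are common defaults, not confirmed details of the paper.

```python
import torch.nn as nn

class DWSConvBlock(nn.Module):
    """Standard depth-wise separable convolution block for lightweight
    pyramid backbones."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depth-wise 3x3: each channel is convolved independently.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Point-wise 1x1: mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()

    def forward(self, x):  # x: (B, C_in, H, W)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

Compared with a dense 3x3 convolution, this factorization cuts parameters and FLOPs by roughly a factor of the kernel size squared, which is why it is a staple of lightweight designs.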
Source journal

Neurocomputing (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.