Res2former: A multi-scale fusion based transformer feature extraction method

IF 3.1 · Zone 4 (Computer Science) · Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Bojun Xie, Yanjie Wang, Shaocong Guo, Junfen Chen
{"title":"Res2former:一种基于多尺度融合的变压器特征提取方法","authors":"Bojun Xie,&nbsp;Yanjie Wang,&nbsp;Shaocong Guo,&nbsp;Junfen Chen","doi":"10.1016/j.jvcir.2025.104546","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we propose Res2former, a novel lightweight hybrid architecture that combines convolutional neural networks (CNNs) and Transformers to effectively model both local and global dependencies in visual data. While Vision Transformer (ViT) demonstrates strong global modeling capability, it lack locality and translation-invariance, making it reliant on large-scale datasets and computational resources. To address this, Res2former adopts a stage-wise hybrid design: in shallow layers, CNNs replace Transformer blocks to exploit local inductive biases and reduce early computational cost; in deeper layers, we introduce a multi-scale fusion mechanism by embedding multiple parallel convolutional kernels of varying receptive fields into the Transformer’s MLP structure. This enables Res2former to capture multi-scale visual semantics more effectively and fuse features across different scales. Experimental results reveal that with the same parameters and computational complexity, Res2former outperforms variants of Transformer and CNN models in image classification (80.7 top-1 accuracy on ImageNet-1K), object detection (45.8 <span><math><mrow><mi>A</mi><msup><mrow><mi>P</mi></mrow><mrow><mi>b</mi><mi>o</mi><mi>x</mi></mrow></msup></mrow></math></span> on the COCO 2017 Validation Set), and instance segmentation (41.0 <span><math><mrow><mi>A</mi><msup><mrow><mi>P</mi></mrow><mrow><mi>m</mi><mi>a</mi><mi>s</mi><mi>k</mi></mrow></msup></mrow></math></span> on the COCO 2017 Validation Set) tasks. The code is publicly accessible at <span><span>https://github.com/hand-Max/Res2former</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"112 ","pages":"Article 104546"},"PeriodicalIF":3.1000,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Res2former: A multi-scale fusion based transformer feature extraction method\",\"authors\":\"Bojun Xie,&nbsp;Yanjie Wang,&nbsp;Shaocong Guo,&nbsp;Junfen Chen\",\"doi\":\"10.1016/j.jvcir.2025.104546\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In this paper, we propose Res2former, a novel lightweight hybrid architecture that combines convolutional neural networks (CNNs) and Transformers to effectively model both local and global dependencies in visual data. While Vision Transformer (ViT) demonstrates strong global modeling capability, it lack locality and translation-invariance, making it reliant on large-scale datasets and computational resources. To address this, Res2former adopts a stage-wise hybrid design: in shallow layers, CNNs replace Transformer blocks to exploit local inductive biases and reduce early computational cost; in deeper layers, we introduce a multi-scale fusion mechanism by embedding multiple parallel convolutional kernels of varying receptive fields into the Transformer’s MLP structure. This enables Res2former to capture multi-scale visual semantics more effectively and fuse features across different scales. 
Experimental results reveal that with the same parameters and computational complexity, Res2former outperforms variants of Transformer and CNN models in image classification (80.7 top-1 accuracy on ImageNet-1K), object detection (45.8 <span><math><mrow><mi>A</mi><msup><mrow><mi>P</mi></mrow><mrow><mi>b</mi><mi>o</mi><mi>x</mi></mrow></msup></mrow></math></span> on the COCO 2017 Validation Set), and instance segmentation (41.0 <span><math><mrow><mi>A</mi><msup><mrow><mi>P</mi></mrow><mrow><mi>m</mi><mi>a</mi><mi>s</mi><mi>k</mi></mrow></msup></mrow></math></span> on the COCO 2017 Validation Set) tasks. The code is publicly accessible at <span><span>https://github.com/hand-Max/Res2former</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":54755,\"journal\":{\"name\":\"Journal of Visual Communication and Image Representation\",\"volume\":\"112 \",\"pages\":\"Article 104546\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Visual Communication and Image Representation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1047320325001609\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visual Communication and Image Representation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1047320325001609","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we propose Res2former, a novel lightweight hybrid architecture that combines convolutional neural networks (CNNs) and Transformers to effectively model both local and global dependencies in visual data. While the Vision Transformer (ViT) demonstrates strong global modeling capability, it lacks locality and translation invariance, making it reliant on large-scale datasets and computational resources. To address this, Res2former adopts a stage-wise hybrid design: in shallow layers, CNNs replace Transformer blocks to exploit local inductive biases and reduce early computational cost; in deeper layers, we introduce a multi-scale fusion mechanism by embedding multiple parallel convolutional kernels of varying receptive fields into the Transformer's MLP structure. This enables Res2former to capture multi-scale visual semantics more effectively and fuse features across different scales. Experimental results show that, with the same parameters and computational complexity, Res2former outperforms Transformer and CNN variants in image classification (80.7% top-1 accuracy on ImageNet-1K), object detection (45.8 AP^box on the COCO 2017 validation set), and instance segmentation (41.0 AP^mask on the COCO 2017 validation set). The code is publicly accessible at https://github.com/hand-Max/Res2former.
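
The core mechanism described above, parallel convolutional kernels of varying receptive fields embedded in the Transformer's MLP, can be illustrated with a minimal sketch. This is not the authors' implementation: the module name MultiScaleMLP, the use of depthwise convolutions, and the 3/5/7 kernel sizes are all assumptions made for illustration; the official code is at https://github.com/hand-Max/Res2former.

# Illustrative sketch (assumed design, not the paper's code) of a Transformer MLP
# whose hidden channels are split across parallel depthwise convolutions with
# different kernel sizes and then re-fused, i.e. the multi-scale fusion idea
# described in the abstract.
import torch
import torch.nn as nn


class MultiScaleMLP(nn.Module):
    """MLP acting on a (B, C, H, W) feature map, with hidden channels divided
    among parallel depthwise convs of varying receptive fields."""

    def __init__(self, dim, mlp_ratio=4, kernel_sizes=(3, 5, 7)):
        super().__init__()
        hidden = dim * mlp_ratio
        assert hidden % len(kernel_sizes) == 0
        branch_dim = hidden // len(kernel_sizes)

        self.fc1 = nn.Conv2d(dim, hidden, kernel_size=1)  # pointwise "Linear" expansion
        self.act = nn.GELU()
        # One depthwise conv per scale; each branch sees a different receptive field.
        self.branches = nn.ModuleList(
            nn.Conv2d(branch_dim, branch_dim, kernel_size=k,
                      padding=k // 2, groups=branch_dim)
            for k in kernel_sizes
        )
        self.fc2 = nn.Conv2d(hidden, dim, kernel_size=1)  # fuses the scales back to dim

    def forward(self, x):
        x = self.act(self.fc1(x))
        # Split the hidden channels, run each chunk through its own scale branch,
        # then concatenate so fc2 can mix information across scales.
        chunks = torch.chunk(x, len(self.branches), dim=1)
        x = torch.cat([conv(c) for conv, c in zip(self.branches, chunks)], dim=1)
        return self.fc2(x)


if __name__ == "__main__":
    block = MultiScaleMLP(dim=96)
    out = block(torch.randn(2, 96, 14, 14))
    print(out.shape)  # torch.Size([2, 96, 14, 14])

Depthwise convolutions are used in the sketch only to keep the added parameter and FLOP cost small, consistent with the abstract's claim of equal parameters and computational complexity; the actual branch design in Res2former may differ.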
Source journal

Journal of Visual Communication and Image Representation (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 5.40
Self-citation rate: 11.50%
Articles published: 188
Review time: 9.9 months
Journal description: The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.