Multi-scale spatial pyramid attention mechanism for image recognition: An effective approach

Impact Factor 7.5 · CAS Region 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Yang Yu, Yi Zhang, Zeyu Cheng, Zhe Song, Chengkai Tang
DOI: 10.1016/j.engappai.2024.108261
Journal: Engineering Applications of Artificial Intelligence, Volume 133, Article 108261
Published: 2024-04-05 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0952197624004196
Citations: 0

Abstract

Attention mechanisms have become an essential tool for enhancing the representational power of convolutional neural networks (CNNs). Despite recent progress, open problems remain: most existing methods neglect to model multi-scale feature representations, structural information, and long-range channel dependencies, all of which are essential for producing more discriminative attention maps. This study proposes a novel, low-overhead, high-performance attention mechanism, Multi-Scale Spatial Pyramid Attention (MSPA), which generalizes well across networks and datasets and addresses these limitations. MSPA comprises two key components: the Hierarchical-Phantom Convolution (HPC) module, which extracts multi-scale spatial information at a granular level via hierarchical residual-like connections, and the Spatial Pyramid Recalibration (SPR) module, which adaptively combines structural regularization with structural information while employing the Softmax operation to build long-range channel dependencies. MSPA can be conveniently embedded into various CNNs as a plug-and-play component. Accordingly, by replacing the 3 × 3 convolution in the bottleneck residual blocks of ResNets with MSPA, we construct a family of simple, efficient backbones named MSPANet, which naturally inherit the advantages of MSPA. Without bells and whistles, our method substantially outperforms state-of-the-art counterparts on all evaluation metrics in extensive experiments on CIFAR-100 and ImageNet-1K image recognition. Applied to ResNet-50, MSPA achieves top-1 classification accuracies of 81.74% on CIFAR-100 and 78.40% on ImageNet-1K, exceeding the corresponding baselines by 3.95% and 2.27%, respectively, and improving on the competitive EPSANet-50 by 1.15% and 0.91%.
Empirical results in autonomous driving engineering applications further demonstrate that our method significantly improves the accuracy and real-time performance of image recognition at lower overhead. Our code is publicly available at https://github.com/ndsclark/MSPANet.
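The two ideas the abstract names — hierarchical residual-like connections for multi-scale spatial features, and a Softmax-based recalibration for long-range channel dependencies — can be illustrated with a toy NumPy sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: a 3 × 3 mean filter stands in for the real learned convolutions, and the function names (`hierarchical_multiscale`, `softmax_channel_recalibration`) are hypothetical.

```python
import numpy as np

def softmax(v, axis=-1):
    # Numerically stable softmax along one axis.
    e = np.exp(v - v.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_multiscale(x, n_groups=4):
    """Hierarchical residual-like connections (Res2Net-style, as the HPC
    module is described): split channels into groups; each group is filtered
    after adding the previous group's output, so later groups accumulate
    progressively larger receptive fields. A 3x3 mean filter stands in for
    the real convolutions. x has shape (C, H, W)."""
    groups = np.split(x, n_groups, axis=0)
    outs, prev = [], None
    for g in groups:
        y = g if prev is None else g + prev          # residual-like link
        p = np.pad(y, ((0, 0), (1, 1), (1, 1)), mode="edge")
        # toy 3x3 "convolution": average of the nine shifted copies
        y = sum(p[:, i:i + y.shape[1], j:j + y.shape[2]]
                for i in range(3) for j in range(3)) / 9.0
        outs.append(y)
        prev = y
    return np.concatenate(outs, axis=0)

def softmax_channel_recalibration(x):
    """SPR-like step: pool each channel to a descriptor, apply Softmax over
    channels to build long-range channel dependencies, and rescale the
    feature map by the resulting weights."""
    desc = x.mean(axis=(1, 2))                        # (C,) global avg pool
    weights = softmax(desc)                           # channel weights sum to 1
    # multiply by C so the average channel scale stays near 1
    return x * weights[:, None, None] * x.shape[0]

x = np.random.default_rng(0).standard_normal((8, 6, 6))
y = softmax_channel_recalibration(hierarchical_multiscale(x))
```

The sketch preserves the tensor shape end to end, mirroring how a plug-and-play attention block must be shape-preserving to replace a 3 × 3 convolution inside a ResNet bottleneck.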

Source journal: Engineering Applications of Artificial Intelligence (Engineering — Electrical & Electronic)
CiteScore: 9.60
Self-citation rate: 10.00%
Articles per year: 505
Review time: 68 days
Aims and scope: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes.