Lightweight multi-scale global attention enhancement network for image super-resolution

IF 4.2 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence)
Yue Huang, Pan Wang, Yumei Zheng, Bochuan Zheng
Image and Vision Computing, Volume 162, Article 105671. Published 2025-07-23. DOI: 10.1016/j.imavis.2025.105671. Available at: https://www.sciencedirect.com/science/article/pii/S0262885625002598
Citations: 0

Abstract

Transformer-based deep models have achieved impressive results in the field of image super-resolution (SR). However, these algorithms still face a series of problems: redundant attention operations lead to low resource utilization, and the sliding-window mechanism limits the ability to capture multi-scale feature information. To address these issues, this paper proposes a lightweight multi-scale global attention enhancement network (LMGAE-Net). Specifically, to overcome the window limitations of Transformer models, we introduce a multi-scale global attention block (MGAB), which significantly enhances the model's ability to capture long-range information by grouping input features and computing self-attention with varying window sizes. In addition, we propose a multi-group shift fusion block (MSFB), which divides features into equal groups and shifts them in different spatial directions. While keeping the parameter count equivalent to that of a 1×1 convolution, it expands the receptive field, improves the learning and fusion of local features, and further enhances the network's ability to recover image details. Extensive experiments demonstrate that LMGAE-Net outperforms state-of-the-art lightweight SR methods by a large margin.
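The two mechanisms described above can be illustrated concretely. The following is a minimal NumPy sketch, not the authors' implementation: it omits learned projections, multi-head splits, and normalization layers, and all function names are hypothetical. It shows the MGAB idea (channel groups attended within windows of different sizes) and the MSFB idea (four channel groups shifted one pixel in four directions, which itself adds zero parameters).

```python
import numpy as np

def window_self_attention(x, win):
    """Plain single-head self-attention inside non-overlapping win x win
    windows (no learned projections). x: (H, W, C), H and W divisible by win."""
    h, w, c = x.shape
    out = np.empty_like(x)
    for i in range(0, h, win):
        for j in range(0, w, win):
            tokens = x[i:i + win, j:j + win].reshape(-1, c)   # (win*win, C)
            scores = tokens @ tokens.T / np.sqrt(c)
            scores -= scores.max(axis=1, keepdims=True)       # stable softmax
            attn = np.exp(scores)
            attn /= attn.sum(axis=1, keepdims=True)
            out[i:i + win, j:j + win] = (attn @ tokens).reshape(win, win, c)
    return out

def multi_scale_group_attention(x, window_sizes=(2, 4)):
    """MGAB-style idea: split channels into groups, attend within a
    different window size per group, then concatenate the groups."""
    groups = np.split(x, len(window_sizes), axis=-1)
    return np.concatenate(
        [window_self_attention(g, w) for g, w in zip(groups, window_sizes)],
        axis=-1)

def multi_group_shift(x, shift=1):
    """MSFB-style idea: split channels into four equal groups and shift each
    one pixel in a different direction, zero-filling the vacated border.
    The shift adds no parameters. x: (C, H, W), C divisible by 4."""
    g = x.shape[0] // 4
    out = np.zeros_like(x)
    out[:g, :-shift, :] = x[:g, shift:, :]            # shift up
    out[g:2*g, shift:, :] = x[g:2*g, :-shift, :]      # shift down
    out[2*g:3*g, :, :-shift] = x[2*g:3*g, :, shift:]  # shift left
    out[3*g:, :, shift:] = x[3*g:, :, :-shift]        # shift right
    return out
```

In the paper, such a shift is followed by channel mixing (comparable in cost to a 1×1 convolution) so that each output pixel fuses information from its four neighbors while the receptive field grows for free.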
Source journal: Image and Vision Computing (Engineering: Electrical & Electronic)
CiteScore: 8.50
Self-citation rate: 8.50%
Articles per year: 143
Review time: 7.8 months
Aims and scope: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.