{"title":"Fusing global–local feature bank for single image super-resolution","authors":"Zhiyuan Xu, Chuan Lin, Hao Yan, Ningning Guo","doi":"10.1016/j.displa.2024.102932","DOIUrl":null,"url":null,"abstract":"<div><div>Previous work has shown that Transformer-based methods, which have achieved remarkable success in natural language processing (NLP), have also made significant strides in image super-resolution (e.g., SwinIR). However, these methods focus primarily on dynamically establishing long-range relationships between pixels and emphasize the reconstruction of image edges and overall structure, while tending to overlook local texture details, which makes it difficult to recover finer detail. To obtain more texture information for better reconstruction, we present the global–local feature bank fusion network (GLFBFNet). It is a simple but effective method that attends to local contextual information while modeling long-range dependencies, and it establishes a feature bank to store the extracted features so that comprehensive and complete information participates in super-resolution image reconstruction. The core components of GLFBFNet are the dual branch block (DBB) and the global–local feature bank (GLFB). The DBB strikes a balance between global and local modeling, facilitating their collaborative involvement in super-resolution reconstruction. The GLFB, despite its simple structure, prevents the loss of crucial information, thereby providing richer information for reconstruction. These two core components are straightforward to implement and can be easily applied to existing Transformer-based methods. Experimental results demonstrate that GLFBFNet achieves PSNR scores of 33.89 dB and 39.74 dB on the Urban100 and Manga109 datasets, surpassing SwinIR by 0.49 dB and 0.14 dB, respectively.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102932"},"PeriodicalIF":3.7000,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938224002968","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
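The paper itself gives no implementation details beyond the abstract, but the two described components, a dual branch block that combines long-range (global) and local modeling, and a feature bank that stores intermediate features for fusion at reconstruction time, can be illustrated with a minimal NumPy sketch. All names here (`global_branch`, `dual_branch_block`, `FeatureBank`, etc.) are hypothetical stand-ins, not the authors' code: the global branch is a toy single-head self-attention over all pixels, the local branch is a 3×3 mean filter standing in for a local convolution, and the bank fuses by channel concatenation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_branch(x):
    """Toy single-head self-attention over all pixels (long-range modeling).
    x: (H, W, C) feature map."""
    h, w, c = x.shape
    tokens = x.reshape(h * w, c)
    attn = softmax(tokens @ tokens.T / np.sqrt(c))  # (HW, HW) pixel affinities
    return (attn @ tokens).reshape(h, w, c)

def local_branch(x):
    """3x3 mean filter as a stand-in for a local conv (texture modeling)."""
    h, w, _ = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += pad[i:i + h, j:j + w]
    return out / 9.0

def dual_branch_block(x):
    """DBB-style fusion: run both branches and sum their outputs."""
    return global_branch(x) + local_branch(x)

class FeatureBank:
    """GLFB-style bank: store each block's output, fuse by concatenation."""
    def __init__(self):
        self.features = []

    def store(self, f):
        self.features.append(f)

    def fuse(self):
        # (H, W, C * n_blocks): every stored feature reaches reconstruction.
        return np.concatenate(self.features, axis=-1)

# Tiny forward pass: two DBBs whose outputs are banked, then fused.
x = np.random.rand(8, 8, 4).astype(np.float32)
bank = FeatureBank()
for _ in range(2):
    x = dual_branch_block(x)
    bank.store(x)
fused = bank.fuse()
print(fused.shape)  # (8, 8, 8)
```

The point of the bank in this sketch is that intermediate features are not discarded as depth grows; concatenating them lets a final reconstruction head see both early (local texture) and late (global structure) information, which is the behavior the abstract attributes to the GLFB.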
Journal Introduction:
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel for promoting greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.