{"title":"用于图像超分辨率的轻量级高频曼巴网络。","authors":"Tao Wu, Wei Xu, Yajuan Wu","doi":"10.1038/s41598-025-11663-x","DOIUrl":null,"url":null,"abstract":"<p><p>After continuous development, many researchers are exploring how to better utilize global and local information in single image super-resolution (SISR). Various methods based on convolutional neural network (CNN) and Transformer structures have emerged, but few studies have mentioned how to combine these two parts of information. We study the use of self-attention mechanism to integrate local and global information, aiming to make the model better balance the weights of the two parts of information. At the same time, in order to avoid the huge amount of computation brought by Transformer, we use the selective state space model VMamba to extract global information to achieve the effect of reducing computational complexity and lightweight network. Based on the above situation, we propose a High-frequency Mamba Network (HFMN) for SISR, which includes the local high-frequency extraction module Local High-Frequency Feature Block (LHFB), the global feature extraction module Mamba-Based Attention Block (MAB) based on VMamba, and the dual attention fusion module Dual-information Interactive Attention Block (DIAB). It can better incorporate local and global information and has linear complexity in the global feature extraction branch. Experiments on multiple benchmark datasets demonstrate that the network outforms recent SOTA methods in SISR while using fewer parameters. All codes are available at https://github.com/taoWuuu/HFMN .</p>","PeriodicalId":21811,"journal":{"name":"Scientific Reports","volume":"15 1","pages":"25973"},"PeriodicalIF":3.9000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12271450/pdf/","citationCount":"0","resultStr":"{\"title\":\"A lightweight high-frequency mamba network for image super-resolution.\",\"authors\":\"Tao Wu, Wei Xu, Yajuan Wu\",\"doi\":\"10.1038/s41598-025-11663-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>After continuous development, many researchers are exploring how to better utilize global and local information in single image super-resolution (SISR). Various methods based on convolutional neural network (CNN) and Transformer structures have emerged, but few studies have mentioned how to combine these two parts of information. We study the use of self-attention mechanism to integrate local and global information, aiming to make the model better balance the weights of the two parts of information. At the same time, in order to avoid the huge amount of computation brought by Transformer, we use the selective state space model VMamba to extract global information to achieve the effect of reducing computational complexity and lightweight network. Based on the above situation, we propose a High-frequency Mamba Network (HFMN) for SISR, which includes the local high-frequency extraction module Local High-Frequency Feature Block (LHFB), the global feature extraction module Mamba-Based Attention Block (MAB) based on VMamba, and the dual attention fusion module Dual-information Interactive Attention Block (DIAB). It can better incorporate local and global information and has linear complexity in the global feature extraction branch. Experiments on multiple benchmark datasets demonstrate that the network outforms recent SOTA methods in SISR while using fewer parameters. 
All codes are available at https://github.com/taoWuuu/HFMN .</p>\",\"PeriodicalId\":21811,\"journal\":{\"name\":\"Scientific Reports\",\"volume\":\"15 1\",\"pages\":\"25973\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-07-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12271450/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Scientific Reports\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.1038/s41598-025-11663-x\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Scientific Reports","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1038/s41598-025-11663-x","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}

Abstract: In single image super-resolution (SISR), many researchers are exploring how to better exploit global and local information. Numerous methods based on convolutional neural networks (CNNs) and Transformer architectures have emerged, but few studies address how to combine these two sources of information. We study the use of a self-attention mechanism to integrate local and global information, aiming to let the model better balance the weights assigned to each. To avoid the heavy computation of Transformers, we use the selective state space model VMamba to extract global information, reducing computational complexity and keeping the network lightweight. Building on this, we propose a High-Frequency Mamba Network (HFMN) for SISR, which comprises a local high-frequency extraction module, the Local High-Frequency Feature Block (LHFB); a VMamba-based global feature extraction module, the Mamba-Based Attention Block (MAB); and a dual-attention fusion module, the Dual-information Interactive Attention Block (DIAB). The network integrates local and global information effectively and has linear complexity in its global feature extraction branch. Experiments on multiple benchmark datasets demonstrate that HFMN outperforms recent state-of-the-art (SOTA) SISR methods while using fewer parameters. All code is available at https://github.com/taoWuuu/HFMN.
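The abstract describes a dual-branch layout: a convolutional branch (LHFB) extracts local high-frequency detail, a VMamba-based branch (MAB) captures global context with linear complexity, and an attention module (DIAB) fuses the two. The PyTorch sketch below illustrates that layout only; every module body is an assumption, the class names LHFBSketch, MABPlaceholder, and DIABSketch are hypothetical, and VMamba's 2D selective scan is replaced by a simple stand-in. The authors' actual implementation is in the linked repository.

```python
# Minimal sketch of the dual-branch structure described in the abstract.
# All internals are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn


class LHFBSketch(nn.Module):
    """Local branch: isolate high frequencies by subtracting a low-pass
    (blurred) copy of the feature map, then refine with small convs."""
    def __init__(self, dim):
        super().__init__()
        self.blur = nn.AvgPool2d(3, stride=1, padding=1)  # cheap low-pass filter
        self.refine = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, x):
        high_freq = x - self.blur(x)  # residual of the low-pass = high frequencies
        return self.refine(high_freq)


class MABPlaceholder(nn.Module):
    """Stand-in for the VMamba-based global branch. A real MAB would run a
    2D selective state-space scan (linear in sequence length); here a
    depthwise large-kernel conv merely mimics a large receptive field."""
    def __init__(self, dim):
        super().__init__()
        self.global_mix = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        return self.proj(self.global_mix(x))


class DIABSketch(nn.Module):
    """Fusion: predict per-channel gates from both branches and blend them,
    letting the model weight local vs. global information."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * dim, dim, 1),
            nn.Sigmoid())

    def forward(self, local_feat, global_feat):
        g = self.gate(torch.cat([local_feat, global_feat], dim=1))
        return g * local_feat + (1 - g) * global_feat


class HFMNBlockSketch(nn.Module):
    """One residual block combining the three sketched modules."""
    def __init__(self, dim=48):
        super().__init__()
        self.lhfb = LHFBSketch(dim)
        self.mab = MABPlaceholder(dim)
        self.diab = DIABSketch(dim)

    def forward(self, x):
        return x + self.diab(self.lhfb(x), self.mab(x))


if __name__ == "__main__":
    x = torch.randn(1, 48, 64, 64)
    print(HFMNBlockSketch(48)(x).shape)  # torch.Size([1, 48, 64, 64])
```

The channel gating in DIABSketch is one simple way to balance local and global features; the paper's dual-information interactive attention is likely more elaborate, so treat this as a structural illustration under stated assumptions rather than a reproduction of HFMN.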
Journal introduction:
We publish original research from all areas of the natural sciences, psychology, medicine and engineering. You can learn more about what we publish by browsing our specific scientific subject areas below or explore Scientific Reports by browsing all articles and collections.
Scientific Reports has a two-year impact factor of 4.380 (2021) and is the 6th most-cited journal in the world, with more than 540,000 citations in 2020 (Clarivate Analytics, 2021).
•Engineering
Engineering covers all aspects of engineering, technology, and applied science. It plays a crucial role in the development of technologies to address some of the world's biggest challenges, helping to save lives and improve the way we live.
•Physical sciences
Physical sciences are those academic disciplines that aim to uncover the underlying laws of nature — often written in the language of mathematics. It is a collective term for areas of study including astronomy, chemistry, materials science and physics.
•Earth and environmental sciences
Earth and environmental sciences cover all aspects of Earth and planetary science and broadly encompass solid Earth processes, surface and atmospheric dynamics, Earth system history, climate and climate change, marine and freshwater systems, and ecology. These fields also consider the interactions between humans and these systems.
•Biological sciences
Biological sciences encompass all divisions of the natural sciences that examine various aspects of vital processes. The concept includes anatomy, physiology, cell biology, biochemistry and biophysics, and covers all organisms, from microorganisms and animals to plants.
•Health sciences
The health sciences study health, disease and healthcare. This field of study aims to develop knowledge, interventions and technology for use in healthcare to improve the treatment of patients.