{"title":"Partial Attention Feature Aggregation Network for Lightweight Remote Sensing Image Super-Resolution","authors":"Wei Xue;Tiancheng Shao;Mingyang Du;Xiao Zheng;Ping Zhong","doi":"10.1109/LGRS.2025.3601595","DOIUrl":null,"url":null,"abstract":"Most lightweight super-resolution networks are designed to improve performance by introducing an attention mechanism and to reduce model parameters by designing lightweight convolutional layers. However, the introduction of the attention mechanism often leads to an increase in the number of parameters. In addition, the lightweight convolutional layer has a limited receptive field and cannot effectively capture long-range dependencies. In this letter, we design a novel lightweight base module called partial attention convolution (PAConv) and develop three variants of PAConv with different receptive fields to collaboratively exploit nonlocal information. Based on PAConv, we further propose a lightweight super-resolution network called partial attention feature aggregation network (PAFAN). Specifically, we arrange the PAConv variants in a progressive iterative manner to form the attention progressive feature distillation block (APFDB), which aims to gradually optimize the distilled features. Furthermore, we construct a multilevel aggregation spatial attention (MASA) via a stacking of the PAConv variants to systematically coordinate multiscale structural information. Extensive experiments conducted on benchmark datasets show that PAFAN achieves an optimal balance between reconstruction quality and computational efficiency. 
In particular, with only 123 K parameters and 0.49G FLOPs, PAFAN can maintain a performance comparable to that of SOTA methods.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4000,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11134442/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Most lightweight super-resolution networks improve performance by introducing an attention mechanism and reduce model parameters by designing lightweight convolutional layers. However, the introduction of an attention mechanism often increases the number of parameters. In addition, a lightweight convolutional layer has a limited receptive field and cannot effectively capture long-range dependencies. In this letter, we design a novel lightweight base module called partial attention convolution (PAConv) and develop three variants of PAConv with different receptive fields to collaboratively exploit nonlocal information. Based on PAConv, we further propose a lightweight super-resolution network called the partial attention feature aggregation network (PAFAN). Specifically, we arrange the PAConv variants in a progressive iterative manner to form the attention progressive feature distillation block (APFDB), which gradually refines the distilled features. Furthermore, we construct a multilevel aggregation spatial attention (MASA) by stacking the PAConv variants to systematically coordinate multiscale structural information. Extensive experiments on benchmark datasets show that PAFAN achieves an optimal balance between reconstruction quality and computational efficiency. In particular, with only 123 K parameters and 0.49 G FLOPs, PAFAN maintains performance comparable to that of state-of-the-art (SOTA) methods.
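The letter itself details the exact PAConv design; as a rough intuition for how a "partial" convolution keeps parameter counts low, the sketch below convolves only a fraction of the input channels and passes the rest through unchanged, then applies a toy channel gate standing in for the attention path. Everything here (the 1-D setting, the channel ratio, the sigmoid-of-mean gate, all function names) is an illustrative assumption, not the authors' implementation.

```python
import math

def partial_conv_1d(x, kernel, ratio=0.25):
    """Convolve only the first `ratio` fraction of channels (cheap path);
    the remaining channels are passed through as an identity (free path).

    x: list of channels, each a list of floats; kernel: list of floats.
    This channel split is the source of the parameter/FLOP savings: the
    convolution weights only ever touch k = ceil-ish(n * ratio) channels.
    """
    n = len(x)
    k = max(1, int(n * ratio))  # number of channels that get convolved
    pad = len(kernel) // 2      # zero padding to keep spatial length
    out = []
    for c in range(k):
        ch = x[c]
        padded = [0.0] * pad + ch + [0.0] * pad
        out.append([sum(padded[i + j] * kernel[j] for j in range(len(kernel)))
                    for i in range(len(ch))])
    out.extend(x[k:])           # identity path: untouched channels
    return out

def toy_channel_gate(x):
    """Stand-in for the attention branch: rescale each channel by a
    sigmoid of its mean activation (purely illustrative, not MASA)."""
    return [[v * (1.0 / (1.0 + math.exp(-sum(ch) / len(ch)))) for v in ch]
            for ch in x]
```

With four channels and `ratio=0.25`, only one channel is convolved, so a 3-tap kernel here costs 3 weights instead of 12; the same idea in 2-D is what lets a PAConv-style block add attention without the usual parameter growth.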