HAMSA: Hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution
Hanguang Xiao, Hao Wen, Xin Wang, Kun Zuo, Tianqi Liu, Wei Wang, Yong Xu
Digital Signal Processing, Volume 161, Article 105098. Published 2025-02-25. DOI: 10.1016/j.dsp.2025.105098
https://www.sciencedirect.com/science/article/pii/S1051200425001204
Citations: 0
Abstract
Video Super-Resolution (VSR) aims to enhance the resolution of video frames by exploiting multiple adjacent low-resolution frames. For cross-frame information extraction, most existing methods perform alignment using optical flow or offsets learned through deformable convolution. However, due to the complexity of real-world motion, estimating flow or motion offsets is challenging and can be inaccurate. To address this problem, we propose a novel hybrid attention transformer and multi-scale alignment aggregation network for video super-resolution, named HAMSA. HAMSA adopts a U-shaped architecture to achieve progressive alignment in a multi-scale manner. Specifically, we first develop a hybrid attention transformer (HAT) feature extraction module, which uses the proposed channel motion attention (CMA) to extract features that facilitate inter-frame alignment. Second, we design a U-shaped multi-scale feature alignment (MSFA) module that ensures precise motion estimation between frames by starting from large-scale features, gradually aligning them at smaller scales, and then restoring them via skip connections and upsampling. In addition, to further refine the alignment, we introduce a non-local feature aggregation (NLFA) module, which applies non-local operations to minimize alignment errors and enhance detail fidelity, thereby improving the overall quality of the super-resolved video frames. Extensive experiments on the Vid4, Vimeo90k-T, and REDS4 datasets demonstrate that HAMSA achieves superior VSR performance compared with other state-of-the-art (SOTA) methods while maintaining a good balance between model size and performance.
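The non-local aggregation step mentioned in the abstract follows the general idea of non-local operations: each position's feature is refined by a similarity-weighted sum over features at all other positions, so residual alignment errors at one location can be compensated by well-aligned features elsewhere. The paper's abstract gives no implementation details, so the sketch below is a generic embedded-Gaussian non-local block in NumPy; the function name, embedding size, and random stand-in projection matrices are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_aggregate(feat, d_embed=8, seed=0):
    """Generic non-local operation (embedded-Gaussian form).

    feat: (N, C) array of N spatial positions with C channels.
    Each output row is a residual refinement: the input plus a
    similarity-weighted sum of projected features from ALL positions.
    """
    rng = np.random.default_rng(seed)
    n, c = feat.shape
    # Learned 1x1 projections in a real network; random stand-ins here.
    W_q = rng.standard_normal((c, d_embed)) / np.sqrt(c)
    W_k = rng.standard_normal((c, d_embed)) / np.sqrt(c)
    W_v = rng.standard_normal((c, c)) / np.sqrt(c)

    q, k, v = feat @ W_q, feat @ W_k, feat @ W_v
    # (N, N) pairwise similarity weights; each row sums to 1.
    attn = softmax(q @ k.T / np.sqrt(d_embed), axis=-1)
    # Residual aggregation over all positions.
    return feat + attn @ v

# Toy usage: 6 positions, 4 channels.
x = np.arange(24, dtype=float).reshape(6, 4)
y = non_local_aggregate(x)
```

Because the attention weights span every position rather than a local window, this kind of block can pull in information from distant but correctly aligned regions, which is consistent with the role the abstract assigns to NLFA.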
About the journal
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Its objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• cheminformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy