{"title":"用于模拟单张图像超级分辨率的跨维注意力融合网络","authors":"Jingbo He;Xiaohai He;Shuhua Xiong;Honggang Chen","doi":"10.1109/TBC.2024.3408643","DOIUrl":null,"url":null,"abstract":"Single image super-resolution (SISR) is a task of reconstructing high-resolution (HR) images from low-resolution (LR) images, which are obtained by some degradation process. Deep neural networks (DNNs) have greatly advanced the frontier of image super-resolution research and replaced traditional methods as the de facto standard approach. The attention mechanism enables the SR algorithms to achieve breakthrough performance after another. However, limited research has been conducted on the interaction and integration of attention mechanisms across different dimensions. To tackle this issue, in this paper, we propose a cross-dimensional attention fusion network (CAFN) to effectively achieve cross-dimensional inter-action with long-range dependencies. Specifically, the proposed approach involves the utilization of a cross-dimensional aggrega-tion module (CAM) to effectively capture contextual information by integrating both spatial and channel importance maps. The design of information fusion module (IFM) in CAM serves as a bridge for parallel dual-attention information fusion. In addition, a novel memory-adaptive multi-stage (MAMS) training method is proposed. We perform warm-start retraining with the same setting as the previous stage, without increasing memory consumption. If the memory is sufficient, we finetune the model with a larger patch size after the warm-start. The experimental results definitively demonstrate the superior performance of our cross-dimensional attention fusion network and training strategy compared to state-of-the-art (SOTA) methods, as evidenced by both quantitative and qualitative metrics.","PeriodicalId":13159,"journal":{"name":"IEEE Transactions on Broadcasting","volume":"70 3","pages":"909-923"},"PeriodicalIF":3.2000,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Dimensional Attention Fusion Network for Simulated Single Image Super-Resolution\",\"authors\":\"Jingbo He;Xiaohai He;Shuhua Xiong;Honggang Chen\",\"doi\":\"10.1109/TBC.2024.3408643\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Single image super-resolution (SISR) is a task of reconstructing high-resolution (HR) images from low-resolution (LR) images, which are obtained by some degradation process. Deep neural networks (DNNs) have greatly advanced the frontier of image super-resolution research and replaced traditional methods as the de facto standard approach. The attention mechanism enables the SR algorithms to achieve breakthrough performance after another. However, limited research has been conducted on the interaction and integration of attention mechanisms across different dimensions. To tackle this issue, in this paper, we propose a cross-dimensional attention fusion network (CAFN) to effectively achieve cross-dimensional inter-action with long-range dependencies. Specifically, the proposed approach involves the utilization of a cross-dimensional aggrega-tion module (CAM) to effectively capture contextual information by integrating both spatial and channel importance maps. The design of information fusion module (IFM) in CAM serves as a bridge for parallel dual-attention information fusion. In addition, a novel memory-adaptive multi-stage (MAMS) training method is proposed. 
We perform warm-start retraining with the same setting as the previous stage, without increasing memory consumption. If the memory is sufficient, we finetune the model with a larger patch size after the warm-start. The experimental results definitively demonstrate the superior performance of our cross-dimensional attention fusion network and training strategy compared to state-of-the-art (SOTA) methods, as evidenced by both quantitative and qualitative metrics.\",\"PeriodicalId\":13159,\"journal\":{\"name\":\"IEEE Transactions on Broadcasting\",\"volume\":\"70 3\",\"pages\":\"909-923\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Broadcasting\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10571547/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Broadcasting","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10571547/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Single image super-resolution (SISR) is the task of reconstructing high-resolution (HR) images from low-resolution (LR) images, which are obtained through some degradation process. Deep neural networks (DNNs) have greatly advanced the frontier of image super-resolution research and replaced traditional methods as the de facto standard approach. Attention mechanisms have enabled SR algorithms to achieve one performance breakthrough after another. However, limited research has been conducted on the interaction and integration of attention mechanisms across different dimensions. To tackle this issue, in this paper we propose a cross-dimensional attention fusion network (CAFN) to effectively achieve cross-dimensional interaction with long-range dependencies. Specifically, the proposed approach employs a cross-dimensional aggregation module (CAM) to capture contextual information by integrating both spatial and channel importance maps. An information fusion module (IFM) within the CAM serves as a bridge for parallel dual-attention information fusion. In addition, a novel memory-adaptive multi-stage (MAMS) training method is proposed. Each stage performs warm-start retraining with the same settings as the previous stage, without increasing memory consumption; if memory is sufficient, the model is then fine-tuned with a larger patch size after the warm start. The experimental results demonstrate the superior performance of our cross-dimensional attention fusion network and training strategy compared to state-of-the-art (SOTA) methods, as evidenced by both quantitative and qualitative metrics.
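The parallel dual-attention idea in the abstract can be pictured with a short PyTorch sketch: a channel branch and a spatial branch produce importance maps in parallel, and a small fusion step merges them over the input features. This is a minimal illustration under assumed layer choices (bottleneck reduction ratio, pooling statistics, 1x1-conv fusion); it is not the authors' CAM/IFM implementation, whose exact structure is not given in the abstract.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel importance map from globally pooled statistics (assumed bottleneck design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.pool(x))  # (B, C, 1, 1) channel importance map


class SpatialAttention(nn.Module):
    """Spatial importance map from channel-pooled features (assumed design)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))  # (B, 1, H, W)


class CrossDimensionalAggregation(nn.Module):
    """Parallel dual attention with a 1x1-conv fusion step standing in for the paper's IFM."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)  # information fusion bridge

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ch = x * self.channel_att(x)  # channel-weighted branch
        sp = x * self.spatial_att(x)  # spatially-weighted branch
        return x + self.fuse(torch.cat([ch, sp], dim=1))  # residual cross-dimensional fusion
```

Likewise, a hedged sketch of the memory-adaptive multi-stage training idea: each stage warm-starts from the previous stage's weights with the same settings, and the training patch size is enlarged only when a memory probe succeeds. The patch sizes, stage count, and helpers (fits_in_memory, train_stage) are illustrative assumptions, not the paper's actual procedure.

```python
import torch


def fits_in_memory(model: torch.nn.Module, patch_size: int, batch_size: int = 16,
                   device: str = "cuda") -> bool:
    """Dry-run one forward/backward pass to probe whether a patch size fits in GPU memory."""
    try:
        lr_batch = torch.randn(batch_size, 3, patch_size, patch_size, device=device)
        model(lr_batch).mean().backward()
        return True
    except RuntimeError as err:
        if "out of memory" in str(err):
            return False
        raise
    finally:
        model.zero_grad(set_to_none=True)
        torch.cuda.empty_cache()


def mams_train(model, train_stage, patch_sizes=(48, 64, 96), epochs_per_stage=200):
    """train_stage(model, patch_size, epochs) is assumed to run one ordinary SR training stage."""
    current = patch_sizes[0]
    for stage, candidate in enumerate(patch_sizes):
        # Warm start: each stage resumes from the previous stage's weights with the same settings.
        if stage > 0 and fits_in_memory(model, candidate):
            current = candidate  # enlarge the training patches only when memory allows
        train_stage(model, current, epochs_per_stage)
    return model
```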
About the journal:
The Society’s Field of Interest is “Devices, equipment, techniques and systems related to broadcast technology, including the production, distribution, transmission, and propagation aspects.” In addition to this formal FOI statement, which is used to provide guidance to the Publications Committee in the selection of content, the AdCom has further resolved that “broadcast systems includes all aspects of transmission, propagation, and reception.”