Varsha Singh , Naresh Vedhamuru , R. Malmathanraj , P. Palanisamy
{"title":"单图像超分辨率多尺度注意残差卷积神经网络(MSARCNN)","authors":"Varsha Singh , Naresh Vedhamuru , R. Malmathanraj , P. Palanisamy","doi":"10.1016/j.dsp.2025.105614","DOIUrl":null,"url":null,"abstract":"<div><div>Traditional super-resolution methods often struggle to capture fine details and extract features, especially at higher frequency which leads to poor reconstruction of images. Further some SR methods neglect the significance of complexity while designing deeper networks. Deeper networks are challenging to train and have greater computational load which limits the performance of SR method making it less compatible for other devices. To address this problem, we propose a novel Multi-Scale Attention Residual Convolutional Neural Network(MSARCNN). The model combines eight multi-scale attention residual convolution and a Dilated Convolution Block(DCB). Each MSARCB comprises of a squeeze and excitation block which recalibrates feature maps by emphasizing informative channels and a Pixel Attention Block(PAB) which utilizes attention-based weighting to enhance local feature representation. The MSARCB employs multi-scale hierarchical feature extraction with the help of parallel convolution layers with varying channels and DCB with dilation rates of 1,3,5 and 7 which helps in capturing both spatial features and fine details by enlarging the effective receptive field without increasing the number of learnable parameters. 
Experiments on four benchmark dataset demonstrate that the proposed model significantly outperforms other state-of-the-art lightweight SR methods, providing a exceptional balance of reconstruction performance, model complexity and parameter count.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105614"},"PeriodicalIF":3.0000,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-scale Attention Residual Convolution Neural Network for Single Image Super Resolution (MSARCNN)\",\"authors\":\"Varsha Singh , Naresh Vedhamuru , R. Malmathanraj , P. Palanisamy\",\"doi\":\"10.1016/j.dsp.2025.105614\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Traditional super-resolution methods often struggle to capture fine details and extract features, especially at higher frequency which leads to poor reconstruction of images. Further some SR methods neglect the significance of complexity while designing deeper networks. Deeper networks are challenging to train and have greater computational load which limits the performance of SR method making it less compatible for other devices. To address this problem, we propose a novel Multi-Scale Attention Residual Convolutional Neural Network(MSARCNN). The model combines eight multi-scale attention residual convolution and a Dilated Convolution Block(DCB). Each MSARCB comprises of a squeeze and excitation block which recalibrates feature maps by emphasizing informative channels and a Pixel Attention Block(PAB) which utilizes attention-based weighting to enhance local feature representation. 
The MSARCB employs multi-scale hierarchical feature extraction with the help of parallel convolution layers with varying channels and DCB with dilation rates of 1,3,5 and 7 which helps in capturing both spatial features and fine details by enlarging the effective receptive field without increasing the number of learnable parameters. Experiments on four benchmark dataset demonstrate that the proposed model significantly outperforms other state-of-the-art lightweight SR methods, providing a exceptional balance of reconstruction performance, model complexity and parameter count.</div></div>\",\"PeriodicalId\":51011,\"journal\":{\"name\":\"Digital Signal Processing\",\"volume\":\"168 \",\"pages\":\"Article 105614\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-09-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1051200425006360\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200425006360","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Multi-scale Attention Residual Convolution Neural Network for Single Image Super Resolution (MSARCNN)
Traditional super-resolution (SR) methods often struggle to capture fine details and extract features, especially at higher frequencies, which leads to poor image reconstruction. Furthermore, some SR methods neglect computational complexity when designing deeper networks. Deeper networks are challenging to train and carry a greater computational load, which limits the performance of SR methods and makes them less suitable for deployment on other devices. To address this problem, we propose a novel Multi-Scale Attention Residual Convolutional Neural Network (MSARCNN). The model combines eight multi-scale attention residual convolution blocks (MSARCBs) and a Dilated Convolution Block (DCB). Each MSARCB comprises a squeeze-and-excitation block, which recalibrates feature maps by emphasizing informative channels, and a Pixel Attention Block (PAB), which uses attention-based weighting to enhance local feature representation. The MSARCB performs multi-scale hierarchical feature extraction through parallel convolution layers with varying channel counts, while the DCB, with dilation rates of 1, 3, 5 and 7, helps capture both spatial features and fine details by enlarging the effective receptive field without increasing the number of learnable parameters. Experiments on four benchmark datasets demonstrate that the proposed model significantly outperforms other state-of-the-art lightweight SR methods, providing an exceptional balance of reconstruction performance, model complexity and parameter count.
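The abstract's claim that dilated convolutions enlarge the effective receptive field without adding learnable parameters can be illustrated with a short sketch (not the authors' code; the function names and the 1-D demo are illustrative assumptions). A k-tap kernel with dilation d spans k + (k − 1)(d − 1) samples while still holding only k weights:

```python
def effective_kernel(k: int, d: int) -> int:
    """Span of a k-tap convolution with dilation d: k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

def dilated_conv1d(signal, kernel, d):
    """Naive 1-D dilated convolution (valid mode): each of the len(kernel)
    taps samples the input d steps apart, so the span grows with d while
    the number of learnable weights stays fixed at len(kernel)."""
    k = len(kernel)
    span = effective_kernel(k, d)
    return [
        sum(kernel[j] * signal[i + j * d] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

# The DCB's dilation rates of 1, 3, 5 and 7 stretch a 3x3 kernel's span
# to 3, 7, 11 and 15 pixels per axis; every branch still has only 9 taps.
spans = {d: effective_kernel(3, d) for d in (1, 3, 5, 7)}
print(spans)  # {1: 3, 3: 7, 5: 11, 7: 15}

# A 3-tap edge filter with dilation 2 reads samples 2 apart:
print(dilated_conv1d([0, 1, 2, 3, 4, 5], [1, 0, -1], 2))  # [-4, -4]
```

In the 2-D case the same arithmetic applies per axis, which is why the paper's DCB can widen its view of the image without growing the parameter count.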
About the journal:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing yet it aims to be the most innovative. The Journal invites top quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing, including seismic signal processing
• cheminformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy