Image super-resolution reconstruction based on residual compensation combined attention network
Xiyao Li
Journal of Electronics and Information Science, 2023. DOI: 10.23977/jeis.2023.080107
Abstract
In image super-resolution reconstruction, the residual network discards part of the residual information during feature extraction. We propose an image super-resolution reconstruction method based on a residual compensation combined attention network (RCCN). First, we construct a three-branch residual network that compensates for the feature information lost by the standard residual network. Second, we design a joint attention module in which 3D attention supplements pixel-level attention information while channel attention learns per-channel weights. Experimental results show that our method produces visually clearer reconstructions than other state-of-the-art methods and achieves substantial gains on objective evaluation metrics.
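The joint attention idea described above, combining channel-weight learning with a pixel-level (3D) attention map, can be sketched roughly as follows. This is an illustrative reconstruction under our own assumptions, not the authors' published code: the MLP weights `w1`/`w2`, the sigmoid pixel gate, and the function name `joint_attention` are all hypothetical choices for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_attention(feat, w1, w2):
    """Illustrative joint attention (not the paper's exact module).

    feat : feature map of shape (C, H, W)
    w1, w2 : weights of a tiny squeeze-style MLP producing channel weights
    """
    c, h, w = feat.shape
    # Channel attention: global average pool per channel, then a 2-layer MLP
    squeezed = feat.mean(axis=(1, 2))                        # shape (C,)
    channel_w = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))  # shape (C,)
    # Pixel-level (3D) attention: an element-wise gate over every position,
    # complementing the per-channel weights with spatial detail
    pixel_w = sigmoid(feat)                                   # shape (C, H, W)
    # Joint modulation: both attention maps rescale the features
    return feat * channel_w[:, None, None] * pixel_w
```

Because both gates lie in (0, 1), the module only rescales features; in a full network this modulated output would feed back into the residual branches rather than replace them.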