AFFNet: adversarial feature fusion network for super-resolution image reconstruction in remote sensing images

Impact Factor: 1.0 · CAS Tier 4 (Computer Science) · JCR Q4 (Engineering, Electrical & Electronic)
Qian Zhao, Qianxi Yin
DOI: 10.1117/1.jei.33.3.033032 (https://doi.org/10.1117/1.jei.33.3.033032)
Journal: Journal of Electronic Imaging · Journal Article · Published 2024-06-01
Citations: 0

Abstract

Remote sensing images are an important source of Earth-surface information, but they often suffer from coarse, blurred details and poor perceptual quality, which hinders further analysis and application of geographic information. To address these problems, this paper introduces an adversarial feature fusion network with attention mechanisms for super-resolution reconstruction of remote sensing images. First, residual structures are designed in the generator to strengthen deep feature extraction. Each residual structure combines depthwise over-parameterized convolution with a self-attention mechanism, and the two work synergistically to extract deep feature information from remote sensing images. Second, a coordinate attention feature fusion module is introduced at the feature fusion stage to fuse shallow features with deep high-level features, sharpening the model's attention to different features and better reconciling inconsistent semantics. Finally, a pixel-attention upsampling module is designed for the upsampling stage; it adaptively focuses on the most information-rich regions and restores image details more accurately. Extensive experiments on several remote sensing image datasets show that, compared with current state-of-the-art models, our method restores image details better and achieves good subjective visual quality, verifying the effectiveness and superiority of the proposed algorithm.
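The coordinate-attention idea mentioned in the abstract can be illustrated with a minimal, dependency-free sketch. The function below is a simplified, hypothetical stand-in for the paper's module, not its actual implementation: the published module uses learned 1×1 convolutions to mix channels, whereas here identity weights keep the example self-contained. The sketch pools the feature map along each spatial axis separately, so each gate retains positional information along the other axis, then rescales the input with both direction-aware gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x):
    """Toy coordinate-attention gate over a (C, H, W) feature map.

    Pools along width and along height separately, so the two gates
    keep positional information in the remaining axis, then rescales
    the input with both direction-aware gates.
    """
    # Direction-aware pooling: one value per row and per column.
    pool_h = x.mean(axis=2)            # (C, H): average over width
    pool_w = x.mean(axis=1)            # (C, W): average over height
    # The real module mixes channels with shared 1x1 convolutions here;
    # this sketch applies the gating nonlinearity directly.
    a_h = sigmoid(pool_h)[:, :, None]  # (C, H, 1) row-wise gate
    a_w = sigmoid(pool_w)[:, None, :]  # (C, 1, W) column-wise gate
    return x * a_h * a_w               # broadcast to (C, H, W)

x = np.random.default_rng(0).standard_normal((4, 8, 8))
y = coordinate_attention(x)
```

Because both gates lie in (0, 1), the module can only attenuate activations, re-weighting spatial positions per channel without changing the feature map's shape.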
Source journal: Journal of Electronic Imaging (Engineering & Technology — Imaging Science & Photographic Technology)
CiteScore: 1.70
Self-citation rate: 27.30%
Articles per year: 341
Review time: 4.0 months
Journal description: The Journal of Electronic Imaging publishes peer-reviewed papers in all technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, and applications of electronic imaging systems.