HRSF-Net: A High-Resolution Strong Fusion Network for Pixel-Level Classification of the Thin-Stripped Target for Remote Sensing System

Lifan Zhou;Wenjie Xing;Jie Zhu;Yu Xia;Shan Zhong;Shengrong Gong
{"title":"HRSF-Net: A High-Resolution Strong Fusion Network for Pixel-Level Classification of the Thin-Stripped Target for Remote Sensing System","authors":"Lifan Zhou;Wenjie Xing;Jie Zhu;Yu Xia;Shan Zhong;Shengrong Gong","doi":"10.1109/JMASS.2023.3299330","DOIUrl":null,"url":null,"abstract":"High-resolution pixel-level classification of the roads and rivers in the remote sensing system has extremely important application value and has been a research focus which is received extensive attention from the remote sensing society. In recent years, deep convolutional neural networks (DCNNs) have been used in the pixel-level classification of remote sensing images, which has shown extraordinary performance. However, the traditional DCNNs mostly produce discontinuous and incomplete pixel-level classification results when dealing with thin-stripped roads and rivers. To solve the above problem, we put forward a high-resolution strong fusion network (abbreviated as HRSF-Net) which can keep the feature map at high resolution and minimize the texture information loss of the thin-stripped target caused by multiple downsampling operations. In addition, a pixel relationship enhancement and dual-channel attention (PRE-DCA) module is proposed to fully explore the strong correlation between the thin-stripped target pixels, and a hetero-resolution fusion (HRF) module is also proposed to better fuse the feature maps with different resolutions. The proposed HRSF-Net is examined on the two public remote sensing datasets. The ablation experimental result verifies the effectiveness of each module of the HRSF-Net. The comparative experimental result shows that the HRSF-Net has achieved mIoU of 79.05% and 64.46% on the two datasets, respectively, which both outperform some advanced pixel-level classification methods.","PeriodicalId":100624,"journal":{"name":"IEEE Journal on Miniaturization for Air and Space Systems","volume":"4 4","pages":"368-375"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Miniaturization for Air and Space Systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10195987/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

High-resolution pixel-level classification of roads and rivers in remote sensing systems has extremely important application value and has long been a research focus that has received extensive attention from the remote sensing community. In recent years, deep convolutional neural networks (DCNNs) have been applied to the pixel-level classification of remote sensing images and have shown extraordinary performance. However, traditional DCNNs mostly produce discontinuous and incomplete pixel-level classification results when dealing with thin-stripped roads and rivers. To solve this problem, we propose a high-resolution strong fusion network (abbreviated as HRSF-Net) that keeps the feature map at high resolution and minimizes the texture information loss of the thin-stripped target caused by repeated downsampling operations. In addition, a pixel relationship enhancement and dual-channel attention (PRE-DCA) module is proposed to fully exploit the strong correlation between thin-stripped target pixels, and a hetero-resolution fusion (HRF) module is proposed to better fuse feature maps with different resolutions. The proposed HRSF-Net is evaluated on two public remote sensing datasets. The ablation experiments verify the effectiveness of each module of the HRSF-Net, and the comparative experiments show that the HRSF-Net achieves mIoU of 79.05% and 64.46% on the two datasets, respectively, outperforming several advanced pixel-level classification methods.
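
The abstract does not give implementation details of the PRE-DCA or HRF modules. The following is a minimal PyTorch sketch, under stated assumptions, of the two general techniques the abstract names: a dual-branch (channel + spatial) attention block and a hetero-resolution fusion step that upsamples a low-resolution feature map before combining it with a high-resolution one. All class names, channel sizes, and design choices below are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch only -- NOT the authors' PRE-DCA / HRF implementation.
# Shows (a) a generic channel + spatial attention block and (b) fusing feature
# maps of different resolutions via bilinear upsampling and a 1x1 projection.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttention(nn.Module):
    """Generic channel + spatial attention (assumed structure, for illustration)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: gate computed from pooled channel descriptors.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # reweight channels
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1
        )
        return x * self.spatial_gate(pooled)  # reweight spatial positions


class HeteroResolutionFusion(nn.Module):
    """Fuse a low-resolution map into a high-resolution one (assumed scheme)."""
    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        self.project = nn.Conv2d(low_channels, high_channels, kernel_size=1)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        low = F.interpolate(
            low, size=high.shape[-2:], mode="bilinear", align_corners=False
        )  # match the high-resolution spatial size
        return high + self.project(low)  # element-wise fusion after projection


if __name__ == "__main__":
    high = torch.randn(1, 32, 128, 128)  # high-resolution feature map
    low = torch.randn(1, 64, 32, 32)     # low-resolution feature map
    fused = HeteroResolutionFusion(64, 32)(high, low)
    out = DualAttention(32)(fused)
    print(out.shape)  # torch.Size([1, 32, 128, 128])
```

The sketch keeps the high-resolution branch as the residual path and only projects and upsamples the low-resolution branch, which is one common way to preserve fine spatial detail for thin, elongated targets; the paper's actual HRF module may differ.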