AF-Net: All-scale Feature Fusion Network for Road Extraction from Remote Sensing Images

Shide Zou, Fengchao Xiong, Haonan Luo, Jianfeng Lu, Y. Qian
{"title":"AF-Net:用于遥感影像道路提取的全尺度特征融合网络","authors":"Shide Zou, Fengchao Xiong, Haonan Luo, Jianfeng Lu, Y. Qian","doi":"10.1109/DICTA52665.2021.9647235","DOIUrl":null,"url":null,"abstract":"Road extraction from high-resolution remote sensing images (RSIs) is a challenging task due to occlusion, irregular structures, complex background, etc. A typical solution for road extraction is semantic segmentation that tries to segment the road region directly from the background region at the pixel level. Because of the narrow and slender structures of roads, high-quality multi-resolution and diverse semantic feature representations are necessary for this task. To this end, this paper introduces an all-scale feature fusion network named as AF-Net to extract roads from RSIs. AF-Net adopts an encoder-decoder architecture, whose encoder and decoder are connected by the introduced all-scale feature fusion module (AF-module). AF-module contains multiple feature fusion stages, corresponding to features of different scales. At each stage of feature fusion, all-scale all-level feature representations are employed to recursively integrate the features from two paths. One path propagates the high-resolution spatial features to the current scale feature and another path merges the current scale feature with high-level semantic features. In this way, we effectively employ all-scale features with varied spatial information and semantic information in each fusion stage, facilitating producing more accurate spatial information and richer semantic information for road extraction. Moreover, a convolutional block attention module is embedded into AF-module to suppress unconducive features from the surrounding background and improve the quality of extracted roads. Due to the features with richer semantic information and more precise spatial information, the proposed AF-Net outperforms other state-of-the-art methods on two benchmark datasets.","PeriodicalId":424950,"journal":{"name":"2021 Digital Image Computing: Techniques and Applications (DICTA)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"AF-Net: All-scale Feature Fusion Network for Road Extraction from Remote Sensing Images\",\"authors\":\"Shide Zou, Fengchao Xiong, Haonan Luo, Jianfeng Lu, Y. Qian\",\"doi\":\"10.1109/DICTA52665.2021.9647235\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Road extraction from high-resolution remote sensing images (RSIs) is a challenging task due to occlusion, irregular structures, complex background, etc. A typical solution for road extraction is semantic segmentation that tries to segment the road region directly from the background region at the pixel level. Because of the narrow and slender structures of roads, high-quality multi-resolution and diverse semantic feature representations are necessary for this task. To this end, this paper introduces an all-scale feature fusion network named as AF-Net to extract roads from RSIs. AF-Net adopts an encoder-decoder architecture, whose encoder and decoder are connected by the introduced all-scale feature fusion module (AF-module). AF-module contains multiple feature fusion stages, corresponding to features of different scales. At each stage of feature fusion, all-scale all-level feature representations are employed to recursively integrate the features from two paths. 
One path propagates the high-resolution spatial features to the current scale feature and another path merges the current scale feature with high-level semantic features. In this way, we effectively employ all-scale features with varied spatial information and semantic information in each fusion stage, facilitating producing more accurate spatial information and richer semantic information for road extraction. Moreover, a convolutional block attention module is embedded into AF-module to suppress unconducive features from the surrounding background and improve the quality of extracted roads. Due to the features with richer semantic information and more precise spatial information, the proposed AF-Net outperforms other state-of-the-art methods on two benchmark datasets.\",\"PeriodicalId\":424950,\"journal\":{\"name\":\"2021 Digital Image Computing: Techniques and Applications (DICTA)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 Digital Image Computing: Techniques and Applications (DICTA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DICTA52665.2021.9647235\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Digital Image Computing: Techniques and Applications (DICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DICTA52665.2021.9647235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Road extraction from high-resolution remote sensing images (RSIs) is a challenging task due to occlusion, irregular structures, complex backgrounds, and other factors. A typical solution for road extraction is semantic segmentation, which tries to separate the road region directly from the background at the pixel level. Because roads are narrow and slender structures, high-quality multi-resolution and diverse semantic feature representations are necessary for this task. To this end, this paper introduces an all-scale feature fusion network, named AF-Net, to extract roads from RSIs. AF-Net adopts an encoder-decoder architecture whose encoder and decoder are connected by the introduced all-scale feature fusion module (AF-module). The AF-module contains multiple feature fusion stages, corresponding to features of different scales. At each stage of feature fusion, all-scale, all-level feature representations are employed to recursively integrate the features from two paths. One path propagates high-resolution spatial features to the current-scale feature, and the other path merges the current-scale feature with high-level semantic features. In this way, all-scale features with varied spatial and semantic information are effectively employed in each fusion stage, producing more accurate spatial information and richer semantic information for road extraction. Moreover, a convolutional block attention module is embedded into the AF-module to suppress unconducive features from the surrounding background and improve the quality of the extracted roads. Owing to these features with richer semantic information and more precise spatial information, the proposed AF-Net outperforms other state-of-the-art methods on two benchmark datasets.
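
To make the two-path, all-scale fusion described in the abstract concrete, the following is a minimal PyTorch sketch of a single fusion stage with a simplified CBAM-style attention block. It is an illustration only, not the authors' implementation; every class name, channel count, and the resampling strategy are assumptions made for this example.

# Illustrative sketch only (assumed PyTorch implementation); not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialAttention(nn.Module):
    """Simplified CBAM-style attention: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))


class FusionStage(nn.Module):
    """One fusion stage: project every scale to a common width, resample it to the
    target resolution, fuse the stack (higher-resolution inputs act as the spatial
    path, lower-resolution inputs as the semantic path), then apply attention."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # One 1x1 projection per input scale so all features share `out_channels`.
        self.projs = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels * len(in_channels), out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.attn = ChannelSpatialAttention(out_channels)

    def forward(self, feats, target_index):
        target_size = feats[target_index].shape[-2:]
        resampled = [
            F.interpolate(p(f), size=target_size, mode="bilinear", align_corners=False)
            for p, f in zip(self.projs, feats)
        ]
        return self.attn(self.fuse(torch.cat(resampled, dim=1)))


if __name__ == "__main__":
    # Hypothetical encoder pyramid: four scales of a 256x256 input.
    feats = [torch.randn(1, c, 256 // 2**i, 256 // 2**i)
             for i, c in enumerate([64, 128, 256, 512])]
    stage = FusionStage(in_channels=[64, 128, 256, 512], out_channels=128)
    out = stage(feats, target_index=1)   # fuse everything at the 128x128 scale
    print(out.shape)                     # torch.Size([1, 128, 128, 128])

In this sketch, every encoder scale contributes to every fusion stage, which is one plausible reading of "all-scale, all-level" fusion; the paper's actual recursive integration and attention placement may differ.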