Swin Transformer Embedding MSMDFFNet for Road Extraction From Remote Sensing Images

Yuchuan Wang;Ling Tong;Jiaxing Yang;Shangtao Qin
DOI: 10.1109/LGRS.2025.3552763
Journal: IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
Publication date: 2025-03-19 (Journal Article)
IEEE Xplore: https://ieeexplore.ieee.org/document/10933973/
Code: https://github.com/wycloveinfall/SwinMSMDFFNet
Citations: 0

Abstract

Contextual road features and multiscale spatial semantic information play a vital role in road extraction from remote sensing (RS) images. However, accurately modeling these essential features with current convolutional neural network (CNN)-based road extraction algorithms remains challenging, leading to fragmented roads in occluded areas. Inspired by the self-attention mechanism of transformers in natural language processing (NLP), we propose an innovative MSMDFFNet in conjunction with Swin Transformer (SwinMSMDFFNet) for road extraction from RS images. First, the Swin Transformer is embedded as an auxiliary encoder into the MSMDFFNet to incorporate necessary self-attention mechanisms. Meanwhile, a multigranularity sampling (MGS) module is introduced to enhance the computation of self-attention at multiple granularities by the Swin Transformer. This module specifically transforms the feature maps produced by the main encoder into suitable inputs for the auxiliary encoder. Furthermore, to enhance the connections between adjacent local windows in the auxiliary encoder, a cross-directional fusion (CDF) module is designed for feeding the features of the auxiliary encoder back into the main encoder. Extensive experiments conducted on the DeepGlobe and LSRV datasets demonstrate that our proposed SwinMSMDFFNet has significant advantages in extracting road structure, particularly in areas with long-distance occlusions. It surpasses existing methods in pixel-level metrics such as F1 score, intersection over union (IoU), and connectivity metric average path length similarity (APLS). The code will be made publicly available at: https://github.com/wycloveinfall/SwinMSMDFFNet.
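The pixel-level metrics named in the abstract, F1 score and intersection over union (IoU), follow their standard definitions for binary road masks. A minimal sketch in plain Python of those generic definitions, not the authors' evaluation code:

```python
# Pixel-level F1 and IoU for binary road masks (1 = road, 0 = background).
# Standard metric definitions; masks are flattened to 1-D sequences.

def confusion_counts(pred, truth):
    """Count true positives, false positives, and false negatives."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def f1_score(pred, truth):
    """Harmonic mean of precision and recall over road pixels."""
    tp, fp, fn = confusion_counts(pred, truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def iou(pred, truth):
    """Intersection over union: TP / (TP + FP + FN)."""
    tp, fp, fn = confusion_counts(pred, truth)
    union = tp + fp + fn
    return tp / union if union else 0.0
```

The connectivity metric APLS additionally compares shortest-path lengths between the predicted and ground-truth road graphs, so it rewards unbroken road topology rather than per-pixel overlap alone; it is not sketched here.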