Swin Transformer Embedding MSMDFFNet for Road Extraction From Remote Sensing Images

Yuchuan Wang;Ling Tong;Jiaxing Yang;Shangtao Qin
{"title":"Swin Transformer Embedding MSMDFFNet for Road Extraction From Remote Sensing Images","authors":"Yuchuan Wang;Ling Tong;Jiaxing Yang;Shangtao Qin","doi":"10.1109/LGRS.2025.3552763","DOIUrl":null,"url":null,"abstract":"Contextual road features and multiscale spatial semantic information play a vital role in road extraction from remote sensing (RS) images. However, accurately modeling these essential features with current convolutional neural network (CNN)-based road extraction algorithms remains challenging, leading to fragmented roads in occluded areas. Inspired by the self-attention mechanism of transformers in natural language processing (NLP), we propose an innovative MSMDFFNet in conjunction with Swin Transformer (SwinMSMDFFNet) for road extraction from RS images. First, the Swin Transformer is embedded as an auxiliary encoder into the MSMDFFNet to incorporate necessary self-attention mechanisms. Meanwhile, a multigranularity sampling (MGS) module is introduced to enhance the computation of self-attention at multiple granularities by the Swin Transformer. This module specifically transforms the feature maps produced by the main encoder into suitable inputs for the auxiliary encoder. Furthermore, to enhance the connections between adjacent local windows in the auxiliary encoder, a cross-directional fusion (CDF) module is designed for feeding the features of the auxiliary encoder back into the main encoder. Extensive experiments conducted on the DeepGlobe and LSRV datasets demonstrate that our proposed SwinMSMDFFNet has significant advantages in extracting road structure, particularly in areas with long-distance occlusions. It surpasses existing methods in pixel-level metrics such as F1 score, intersection over union (IoU), and connectivity metric average path length similarity (APLS). The code will be made publicly available at: <uri>https://github.com/wycloveinfall/SwinMSMDFFNet</uri>.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10933973/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Contextual road features and multiscale spatial semantic information play a vital role in road extraction from remote sensing (RS) images. However, accurately modeling these essential features with current convolutional neural network (CNN)-based road extraction algorithms remains challenging, leading to fragmented roads in occluded areas. Inspired by the self-attention mechanism of transformers in natural language processing (NLP), we propose an innovative MSMDFFNet in conjunction with Swin Transformer (SwinMSMDFFNet) for road extraction from RS images. First, the Swin Transformer is embedded as an auxiliary encoder into the MSMDFFNet to incorporate necessary self-attention mechanisms. Meanwhile, a multigranularity sampling (MGS) module is introduced to enhance the computation of self-attention at multiple granularities by the Swin Transformer. This module specifically transforms the feature maps produced by the main encoder into suitable inputs for the auxiliary encoder. Furthermore, to enhance the connections between adjacent local windows in the auxiliary encoder, a cross-directional fusion (CDF) module is designed for feeding the features of the auxiliary encoder back into the main encoder. Extensive experiments conducted on the DeepGlobe and LSRV datasets demonstrate that our proposed SwinMSMDFFNet has significant advantages in extracting road structure, particularly in areas with long-distance occlusions. It surpasses existing methods in pixel-level metrics such as F1 score, intersection over union (IoU), and connectivity metric average path length similarity (APLS). The code will be made publicly available at: https://github.com/wycloveinfall/SwinMSMDFFNet.
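The abstract outlines a dual-encoder layout: a CNN main encoder, an auxiliary Swin Transformer branch fed through a multigranularity sampling (MGS) step, and a cross-directional fusion (CDF) step that returns the transformer features to the main branch. The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal PyTorch illustration of that data flow, where the class names, channel sizes, token grid, and the use of a generic nn.TransformerEncoderLayer in place of real Swin blocks are all assumptions made for clarity.

```python
# Hedged sketch of the dual-encoder idea described in the abstract.
# All names (MGSSample, CDFFuse, DualEncoderSketch) and dimensions are
# hypothetical; a plain TransformerEncoderLayer stands in for Swin blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MGSSample(nn.Module):
    """Resample a CNN feature map into a fixed token grid for the auxiliary branch."""
    def __init__(self, in_ch, embed_dim, grid=16):
        super().__init__()
        self.grid = grid
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=1)

    def forward(self, x):
        x = self.proj(x)                         # (B, D, H, W)
        x = F.adaptive_avg_pool2d(x, self.grid)  # (B, D, g, g)
        return x.flatten(2).transpose(1, 2)      # (B, g*g, D) tokens


class CDFFuse(nn.Module):
    """Fold auxiliary-branch tokens back into the main CNN feature map."""
    def __init__(self, embed_dim, out_ch, grid=16):
        super().__init__()
        self.grid = grid
        self.proj = nn.Conv2d(embed_dim, out_ch, kernel_size=1)

    def forward(self, tokens, cnn_feat):
        b, n, d = tokens.shape
        t = tokens.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        t = F.interpolate(t, size=cnn_feat.shape[-2:],
                          mode="bilinear", align_corners=False)
        return cnn_feat + self.proj(t)           # residual fusion into main branch


class DualEncoderSketch(nn.Module):
    def __init__(self, in_ch=3, ch=64, embed_dim=96):
        super().__init__()
        self.main = nn.Sequential(               # stand-in for the CNN main encoder
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.mgs = MGSSample(ch, embed_dim)
        self.aux = nn.TransformerEncoderLayer(   # placeholder for Swin Transformer stages
            d_model=embed_dim, nhead=4, batch_first=True)
        self.cdf = CDFFuse(embed_dim, ch)

    def forward(self, x):
        feat = self.main(x)          # main-encoder features
        tokens = self.mgs(feat)      # MGS: CNN features -> transformer tokens
        tokens = self.aux(tokens)    # auxiliary self-attention branch
        return self.cdf(tokens, feat)  # CDF: tokens fused back into main features


if __name__ == "__main__":
    out = DualEncoderSketch()(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```

The point of the sketch is only the wiring: the transformer branch never replaces the CNN features; it augments them and its output is added back, which is the feedback path the CDF module provides in the paper's description.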