{"title":"Swin Transformer Embedding MSMDFFNet for Road Extraction From Remote Sensing Images","authors":"Yuchuan Wang;Ling Tong;Jiaxing Yang;Shangtao Qin","doi":"10.1109/LGRS.2025.3552763","DOIUrl":null,"url":null,"abstract":"Contextual road features and multiscale spatial semantic information play a vital role in road extraction from remote sensing (RS) images. However, accurately modeling these essential features with current convolutional neural network (CNN)-based road extraction algorithms remains challenging, leading to fragmented roads in occluded areas. Inspired by the self-attention mechanism of transformers in natural language processing (NLP), we propose an innovative MSMDFFNet in conjunction with Swin Transformer (SwinMSMDFFNet) for road extraction from RS images. First, the Swin Transformer is embedded as an auxiliary encoder into the MSMDFFNet to incorporate necessary self-attention mechanisms. Meanwhile, a multigranularity sampling (MGS) module is introduced to enhance the computation of self-attention at multiple granularities by the Swin Transformer. This module specifically transforms the feature maps produced by the main encoder into suitable inputs for the auxiliary encoder. Furthermore, to enhance the connections between adjacent local windows in the auxiliary encoder, a cross-directional fusion (CDF) module is designed for feeding the features of the auxiliary encoder back into the main encoder. Extensive experiments conducted on the DeepGlobe and LSRV datasets demonstrate that our proposed SwinMSMDFFNet has significant advantages in extracting road structure, particularly in areas with long-distance occlusions. It surpasses existing methods in pixel-level metrics such as F1 score, intersection over union (IoU), and connectivity metric average path length similarity (APLS). 
The code will be made publicly available at: <uri>https://github.com/wycloveinfall/SwinMSMDFFNet</uri>.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10933973/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Contextual road features and multiscale spatial semantic information play a vital role in road extraction from remote sensing (RS) images. However, accurately modeling these essential features with current convolutional neural network (CNN)-based road extraction algorithms remains challenging, leading to fragmented roads in occluded areas. Inspired by the self-attention mechanism of transformers in natural language processing (NLP), we propose SwinMSMDFFNet, an MSMDFFNet combined with a Swin Transformer, for road extraction from RS images. First, the Swin Transformer is embedded into the MSMDFFNet as an auxiliary encoder to supply the self-attention mechanism that the CNN backbone lacks. Meanwhile, a multigranularity sampling (MGS) module is introduced to let the Swin Transformer compute self-attention at multiple granularities; this module transforms the feature maps produced by the main encoder into suitable inputs for the auxiliary encoder. Furthermore, to strengthen the connections between adjacent local windows in the auxiliary encoder, a cross-directional fusion (CDF) module is designed to feed the features of the auxiliary encoder back into the main encoder. Extensive experiments conducted on the DeepGlobe and LSRV datasets demonstrate that the proposed SwinMSMDFFNet has significant advantages in extracting road structure, particularly in areas with long-distance occlusions. It surpasses existing methods in pixel-level metrics such as F1 score and intersection over union (IoU), as well as in the connectivity metric average path length similarity (APLS). The code will be made publicly available at: https://github.com/wycloveinfall/SwinMSMDFFNet.
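The abstract describes the MGS module as transforming the main encoder's feature maps into multi-granularity inputs for the Swin Transformer branch, but does not specify the sampling operation. A minimal sketch of one plausible reading, assuming block average pooling at a few window sizes (the function name, window choices, and pooling operator are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def multigranularity_sampling(feat, windows=(1, 2, 4)):
    """Average-pool a (C, H, W) feature map at several window sizes,
    producing one coarsened map per granularity.

    Illustrative stand-in for an MGS-style module: each output could be
    flattened into a token sequence for windowed self-attention.
    """
    C, H, W = feat.shape
    pooled_maps = []
    for w in windows:
        h_out, w_out = H // w, W // w
        # Crop so the spatial dims divide evenly, then block-average:
        # group each non-overlapping w x w patch and take its mean.
        cropped = feat[:, :h_out * w, :w_out * w]
        pooled = cropped.reshape(C, h_out, w, w_out, w).mean(axis=(2, 4))
        pooled_maps.append(pooled)
    return pooled_maps
```

With `windows=(1, 2, 4)` an 8x8 input yields 8x8, 4x4, and 2x2 maps, so the attention branch sees the same content at progressively coarser granularities.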