LightCGS-Net: A novel lightweight road extraction method for remote sensing images combining global semantics and spatial details
Yifei Duan, Dan Yang, Xiaochen Qu, Le Zhang, Junsuo Qu, Lu Chao, Peilu Gan
Journal of Network and Computer Applications, vol. 242, Article 104247 (published 2025-06-21). DOI: 10.1016/j.jnca.2025.104247
Citations: 0
Abstract
Road extraction remains a critical area of study in remote sensing imagery. Simultaneously extracting accurate global semantic features and fine spatial detail features from such images remains a significant challenge in contemporary research. This paper introduces a novel road extraction method, termed LightCGS-Net, whose encoder integrates a spatial detail branch derived from a lightweight CNN with a global semantic branch based on an enhanced Swin Transformer. The spatial detail branch improves the accuracy of extracting narrow roads, while the global semantic branch preserves comprehensive contextual information. Gaussian filtering is employed to suppress the noise introduced by the extensive redundant features produced by dual-branch fusion. A Lightweight Parallel Channel and Space Attention Mechanism (PCSAM) in the skip connections addresses road discontinuities caused by occlusion from trees and buildings. The Feature Fusion Mechanism (FFM) module uses dilated convolution to capture edge features and provide precise structural road information, with distinct loss functions for the primary road and road-edge predictions. Experiments on the DeepGlobe, Massachusetts, and SpaceNet 03-05 datasets show that the network compares favorably with traditional road extraction networks, surpassing them in evaluation scores and segmentation quality despite requiring more parameters and computation time. Among Transformer-based road extraction networks, LightCGS-Net stands out for its lower parameter count (43.78M) and FLOPs (196.09G). It performs well on IoU, remains competitive in Recall, F1-score, and Accuracy, shows superior generalization, and exceeds traditional semantic segmentation methods in overall effectiveness.
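The abstract only outlines the architecture at a high level. As an illustration of the kind of dual-branch fusion it describes (lightweight-CNN detail features combined with Transformer semantic features, Gaussian filtering to suppress fusion noise, and a parallel channel/spatial attention gate), the following minimal PyTorch sketch may help. All class names, channel sizes, kernel choices, and the specific attention formulation are assumptions for illustration only, not the authors' published implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def gaussian_kernel2d(size=5, sigma=1.0):
        # Normalized 2-D Gaussian kernel of shape (size, size).
        coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
        g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
        kernel = torch.outer(g, g)
        return kernel / kernel.sum()

    class GaussianSmoothing(nn.Module):
        # Depthwise convolution with a fixed Gaussian kernel to suppress fusion noise.
        def __init__(self, channels, size=5, sigma=1.0):
            super().__init__()
            kernel = gaussian_kernel2d(size, sigma).expand(channels, 1, size, size).contiguous()
            self.register_buffer("kernel", kernel)
            self.groups, self.padding = channels, size // 2

        def forward(self, x):
            return F.conv2d(x, self.kernel, padding=self.padding, groups=self.groups)

    class ParallelChannelSpatialAttention(nn.Module):
        # Channel and spatial attention computed in parallel and summed (PCSAM-like).
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.channel_mlp = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1))
            self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            ch = torch.sigmoid(self.channel_mlp(x))  # (B, C, 1, 1) channel weights
            sp = torch.sigmoid(self.spatial_conv(
                torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
            return x * ch + x * sp  # parallel branches combined by summation

    class DualBranchFusion(nn.Module):
        # Fuse spatial-detail and global-semantic features, then smooth and re-weight.
        def __init__(self, detail_ch, semantic_ch, out_ch):
            super().__init__()
            self.proj = nn.Conv2d(detail_ch + semantic_ch, out_ch, kernel_size=1)
            self.smooth = GaussianSmoothing(out_ch)
            self.attn = ParallelChannelSpatialAttention(out_ch)

        def forward(self, detail, semantic):
            # Upsample the coarser semantic features to the detail-branch resolution.
            semantic = F.interpolate(semantic, size=detail.shape[-2:],
                                     mode="bilinear", align_corners=False)
            fused = self.proj(torch.cat([detail, semantic], dim=1))
            return self.attn(self.smooth(fused))

    # Example: detail features at 1/4 resolution, semantic features at 1/8 resolution.
    fusion = DualBranchFusion(detail_ch=64, semantic_ch=128, out_ch=64)
    out = fusion(torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64))
    print(out.shape)  # torch.Size([1, 64, 128, 128])

The sketch only shows where Gaussian smoothing and parallel attention could sit in a fusion path; the paper's actual PCSAM, FFM, and edge-road loss terms are not reproduced here.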
Journal introduction:
The Journal of Network and Computer Applications welcomes research contributions, surveys, and notes in all areas relating to computer networks and applications thereof. Sample topics include new design techniques; interesting or novel applications, components, or standards; computer networks with tools such as the WWW; emerging standards for Internet protocols; wireless networks; mobile computing; emerging computing models such as cloud computing and grid computing; and applications of networked systems for remote collaboration and telemedicine, etc. The journal is abstracted and indexed in Scopus, Engineering Index, Web of Science, Science Citation Index Expanded, and INSPEC.