GLD-Road: A global–local decoding road network extraction model for remote sensing images

Impact factor: 12.2 · CAS Tier 1 (Earth Sciences) · JCR Q1 (Geography, Physical)
Ligao Deng, Yupeng Deng, Yu Meng, Jingbo Chen, Zhihao Xi, Diyou Liu, Qifeng Chu
DOI: 10.1016/j.isprsjprs.2025.07.026
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Published: 2025-08-10
Citations: 0

Abstract

Road networks are essential information for map updates, autonomous driving, and disaster response. However, manual annotation of road networks from remote sensing imagery is time-consuming and costly, whereas deep learning methods have gained attention for their efficiency and precision in road extraction. Current deep learning approaches for road network extraction fall into three main categories: postprocessing methods based on semantic segmentation results, global parallel methods, and local iterative methods. Postprocessing methods introduce quantization errors, reducing overall road network accuracy; global parallel methods achieve high extraction efficiency but risk omitting road nodes; local iterative methods excel in node detection but have relatively lower extraction efficiency. To address these limitations, we propose a two-stage road extraction model with global–local decoding, named GLD-Road, which combines the high efficiency of global parallel methods with the strong node perception capability of local iterative methods, significantly reducing inference time while maintaining high-precision road network extraction. In the first stage, GLD-Road extracts the coordinates and direction descriptors of road nodes using global information from the entire input image. It then connects adjacent nodes with a self-designed graph network module (the Connect Module) to form the initial road network. In the second stage, starting from the road endpoints in the initial network, GLD-Road iteratively searches local image patches and the local grid map of the primary network to repair broken roads, ultimately producing a complete road network. Because the second stage only requires limited supplementary detection of locally missing nodes, GLD-Road greatly narrows the global iterative search range over the entire image, substantially reducing retrieval time compared with local iterative methods.
Finally, experimental results show that GLD-Road outperforms current state-of-the-art methods, improving average path length similarity (APLS) by 1.9% on the City-Scale dataset and 0.67% on the SpaceNet3 dataset. Moreover, compared with a global parallel method (Sat2Graph) and a local iterative method (RNGDet++), GLD-Road reduces retrieval time by 40% and 92%, respectively, a pronounced improvement in road network extraction efficiency over existing methods. The experimental results are available at https://github.com/ucas-dlg/GLD-Road.
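APLS, the metric reported above, compares shortest-path lengths between corresponding node pairs in the ground-truth and proposed graphs: a missing or badly detoured path is penalized in proportion to its length error. The sketch below follows the published SpaceNet definition in one direction only, on toy dict-of-dicts weighted graphs; the full metric also snaps control points onto the graphs and symmetrizes over both directions, and all names here are illustrative:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra on a dict-of-dicts weighted graph; None if unreachable."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None

def apls_one_direction(gt, prop, pairs):
    """1 - mean(min(1, |L_gt - L_prop| / L_gt)) over sampled node pairs;
    a path missing from the proposal takes the full penalty of 1."""
    penalties = []
    for a, b in pairs:
        l_gt = shortest_path(gt, a, b)
        if l_gt is None:
            continue  # pair disconnected in ground truth: not scored
        l_prop = shortest_path(prop, a, b)
        if l_prop is None:
            penalties.append(1.0)
        else:
            penalties.append(min(1.0, abs(l_gt - l_prop) / l_gt))
    return 1.0 - sum(penalties) / len(penalties)

# Ground truth is the chain A-B-C with unit edges; the proposal misses B-C,
# so the A-C path is lost and takes a full penalty.
gt = {"A": {"B": 1.0}, "B": {"A": 1.0, "C": 1.0}, "C": {"B": 1.0}}
prop = {"A": {"B": 1.0}, "B": {"A": 1.0}, "C": {}}
score = apls_one_direction(gt, prop, [("A", "B"), ("A", "C")])
```

This topology sensitivity is why APLS, rather than pixel overlap, is the headline metric: a one-pixel break in a long road barely changes IoU but severely degrades path-length similarity.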
Source journal: ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology: Imaging Science and Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Articles per year: 273
Review time: 40 days
About the journal: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.