Fan He, Shijie Liu, Sicong Liu, Yanmin Jin, Huan Xie, Xiaohua Tong
{"title":"hroad:一种具有混合注意和方向先验的编码器-解码器架构,用于从遥感图像中有效地提取道路","authors":"Fan He , Shijie Liu , Sicong Liu , Yanmin Jin , Huan Xie , Xiaohua Tong","doi":"10.1016/j.isprsjprs.2025.06.014","DOIUrl":null,"url":null,"abstract":"<div><div>Road extraction from very high resolution (VHR) remote sensing images (RSIs) presents significant challenges due to the varied morphology and high semantic complexity of road structures. Many existing methods struggle to consistently perform well across diverse and complex scenarios. Additionally, balancing efficiency and performance remains an unresolved issue in prior research, particularly those employing transformers. To address these challenges, we propose HDRoad, a novel encoder-decoder architecture that improves both model performance and computational efficiency, enabling training and inference on high-resolution inputs with a single GPU. The encoder, Hybrid Attention Network (HA-Net), combines dense and sparse spatial attention to effectively distill semantic information. The decoder, Directional Augmented Road Morphology Extraction Network (DARMEN), uses morphological priors to accurately refine and reconstruct road features. The model is validated on the DeepGlobe dataset and SouthernChina12k, a newly developed road segmentation dataset comprising 11,791 images from various remote sensing sources. Experimental results demonstrate that HDRoad achieves an IoU of 75.09 % on the DeepGlobe dataset and is the only model exceeding 60 % IoU on SouthernChina12k, setting new benchmarks for state-of-the-art performance in the field.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"227 ","pages":"Pages 251-264"},"PeriodicalIF":10.6000,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HDRoad: An encoder-decoder architecture with hybrid attention and directional prior for efficient road extraction from remote sensing images\",\"authors\":\"Fan He , Shijie Liu , Sicong Liu , Yanmin Jin , Huan Xie , Xiaohua Tong\",\"doi\":\"10.1016/j.isprsjprs.2025.06.014\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Road extraction from very high resolution (VHR) remote sensing images (RSIs) presents significant challenges due to the varied morphology and high semantic complexity of road structures. Many existing methods struggle to consistently perform well across diverse and complex scenarios. Additionally, balancing efficiency and performance remains an unresolved issue in prior research, particularly those employing transformers. To address these challenges, we propose HDRoad, a novel encoder-decoder architecture that improves both model performance and computational efficiency, enabling training and inference on high-resolution inputs with a single GPU. The encoder, Hybrid Attention Network (HA-Net), combines dense and sparse spatial attention to effectively distill semantic information. The decoder, Directional Augmented Road Morphology Extraction Network (DARMEN), uses morphological priors to accurately refine and reconstruct road features. The model is validated on the DeepGlobe dataset and SouthernChina12k, a newly developed road segmentation dataset comprising 11,791 images from various remote sensing sources. 
Experimental results demonstrate that HDRoad achieves an IoU of 75.09 % on the DeepGlobe dataset and is the only model exceeding 60 % IoU on SouthernChina12k, setting new benchmarks for state-of-the-art performance in the field.</div></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"227 \",\"pages\":\"Pages 251-264\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2025-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271625002400\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625002400","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
HDRoad: An encoder-decoder architecture with hybrid attention and directional prior for efficient road extraction from remote sensing images
Road extraction from very high resolution (VHR) remote sensing images (RSIs) presents significant challenges due to the varied morphology and high semantic complexity of road structures. Many existing methods struggle to perform consistently well across diverse and complex scenarios. Additionally, balancing efficiency and performance remains an unresolved issue in prior work, particularly in methods employing transformers. To address these challenges, we propose HDRoad, a novel encoder-decoder architecture that improves both model performance and computational efficiency, enabling training and inference on high-resolution inputs with a single GPU. The encoder, Hybrid Attention Network (HA-Net), combines dense and sparse spatial attention to effectively distill semantic information. The decoder, Directional Augmented Road Morphology Extraction Network (DARMEN), uses morphological priors to accurately refine and reconstruct road features. The model is validated on the DeepGlobe dataset and SouthernChina12k, a newly developed road segmentation dataset comprising 11,791 images from various remote sensing sources. Experimental results demonstrate that HDRoad achieves an IoU of 75.09% on the DeepGlobe dataset and is the only model exceeding 60% IoU on SouthernChina12k, setting new benchmarks for state-of-the-art performance in the field.
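The abstract states that the HA-Net encoder combines dense and sparse spatial attention and reports results in terms of IoU, but gives no implementation details. The following is a minimal, hypothetical PyTorch sketch of what a generic hybrid-attention block and the IoU metric could look like; the class name `HybridAttentionBlock`, the windowed form of sparse attention, and the residual fusion are illustrative assumptions, not the paper's actual HA-Net or DARMEN design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridAttentionBlock(nn.Module):
    """Illustrative fusion of a dense (global) and a sparse (windowed)
    self-attention branch over a sequence of spatial tokens. This is a
    generic interpretation of 'hybrid attention', not the HA-Net definition."""

    def __init__(self, dim: int, heads: int = 4, window: int = 8):
        super().__init__()
        self.window = window
        self.dense_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sparse_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) where N = H * W flattened spatial tokens.
        b, n, c = x.shape

        # Dense branch: full self-attention over all tokens.
        dense, _ = self.dense_attn(x, x, x)

        # Sparse branch: self-attention restricted to fixed-size windows.
        pad = (self.window - n % self.window) % self.window
        xp = F.pad(x, (0, 0, 0, pad))                      # pad token dim
        xw = xp.reshape(b * ((n + pad) // self.window), self.window, c)
        sparse, _ = self.sparse_attn(xw, xw, xw)
        sparse = sparse.reshape(b, n + pad, c)[:, :n]

        # Fuse both branches with a residual connection and normalize.
        return self.norm(x + dense + sparse)


def iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Intersection-over-Union for binary road masks (the metric reported above)."""
    inter = (pred & target).sum().float()
    union = (pred | target).sum().float()
    return (inter / (union + eps)).item()


# Example usage with random tensors standing in for features and masks.
x = torch.randn(2, 64 * 64, 96)          # 2 images, 64x64 tokens, 96 channels
block = HybridAttentionBlock(dim=96)
y = block(x)                              # same shape as x
pred = torch.rand(2, 1, 256, 256) > 0.5
gt = torch.rand(2, 1, 256, 256) > 0.5
print(y.shape, iou(pred, gt))
```

The design choice sketched here, running a cheap windowed branch alongside a global branch and summing them, is one common way to trade attention cost against receptive field; how HDRoad actually balances the two, and how DARMEN applies its directional morphological priors, is described in the full paper.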
Journal Introduction:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive.
P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.