{"title":"条带网:一种从遥感图像中提取密集道路的方法","authors":"Xianzhi Ma;Xiaokai Zhang;Daoxiang Zhou;Zehua Chen","doi":"10.1109/TIV.2024.3393508","DOIUrl":null,"url":null,"abstract":"Road extraction from high-resolution remote sensing images can provide vital data support for applications in urban and rural planning, traffic control, and environmental protection. However, roads in many remote sensing images are densely distributed with a very small proportion of road information against a complex background, significantly impacting the integrity and connectivity of the extracted road network structure. To address this issue, we propose a method named StripUnet for dense road extraction from remote sensing images. The designed Strip Attention Learning Module (SALM) enables the model to focus on strip-shaped roads; the designed Multi-Scale Feature Fusion Module (MSFF) is used for extracting global and contextual information from deep feature maps; the designed Strip Feature Enhancement Module (SFEM) enhances the strip features in feature maps transmitted through skip connections; and the designed Multi-Scale Snake Decoder (MSSD) utilizes dynamic snake convolution to aid the model in better reconstructing roads. The designed model is tested on the public datasets DeepGlobe and Massachusetts, achieving F1 scores of 83.75% and 80.65%, and IoUs of 73.04% and 67.96%, respectively. Compared to the latest state-of-the-art models, F1 scores improve by 1.07% and 1.11%, and IoUs increase by 1.28% and 1.07%, respectively. Experiments demonstrate that StripUnet is highly effective in dense road network extraction.","PeriodicalId":36532,"journal":{"name":"IEEE Transactions on Intelligent Vehicles","volume":"9 11","pages":"7097-7109"},"PeriodicalIF":14.0000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"StripUnet: A Method for Dense Road Extraction From Remote Sensing Images\",\"authors\":\"Xianzhi Ma;Xiaokai Zhang;Daoxiang Zhou;Zehua Chen\",\"doi\":\"10.1109/TIV.2024.3393508\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Road extraction from high-resolution remote sensing images can provide vital data support for applications in urban and rural planning, traffic control, and environmental protection. However, roads in many remote sensing images are densely distributed with a very small proportion of road information against a complex background, significantly impacting the integrity and connectivity of the extracted road network structure. To address this issue, we propose a method named StripUnet for dense road extraction from remote sensing images. The designed Strip Attention Learning Module (SALM) enables the model to focus on strip-shaped roads; the designed Multi-Scale Feature Fusion Module (MSFF) is used for extracting global and contextual information from deep feature maps; the designed Strip Feature Enhancement Module (SFEM) enhances the strip features in feature maps transmitted through skip connections; and the designed Multi-Scale Snake Decoder (MSSD) utilizes dynamic snake convolution to aid the model in better reconstructing roads. The designed model is tested on the public datasets DeepGlobe and Massachusetts, achieving F1 scores of 83.75% and 80.65%, and IoUs of 73.04% and 67.96%, respectively. Compared to the latest state-of-the-art models, F1 scores improve by 1.07% and 1.11%, and IoUs increase by 1.28% and 1.07%, respectively. 
Experiments demonstrate that StripUnet is highly effective in dense road network extraction.\",\"PeriodicalId\":36532,\"journal\":{\"name\":\"IEEE Transactions on Intelligent Vehicles\",\"volume\":\"9 11\",\"pages\":\"7097-7109\"},\"PeriodicalIF\":14.0000,\"publicationDate\":\"2024-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Intelligent Vehicles\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10508493/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Vehicles","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10508493/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
StripUnet: A Method for Dense Road Extraction From Remote Sensing Images
Road extraction from high-resolution remote sensing images can provide vital data support for applications in urban and rural planning, traffic control, and environmental protection. However, in many remote sensing images roads are densely distributed yet occupy only a small fraction of the pixels against a complex background, which significantly degrades the integrity and connectivity of the extracted road network. To address this issue, we propose StripUnet, a method for dense road extraction from remote sensing images. The Strip Attention Learning Module (SALM) enables the model to focus on strip-shaped roads; the Multi-Scale Feature Fusion Module (MSFF) extracts global and contextual information from deep feature maps; the Strip Feature Enhancement Module (SFEM) enhances the strip features in feature maps passed through skip connections; and the Multi-Scale Snake Decoder (MSSD) uses dynamic snake convolution to help the model reconstruct roads more accurately. StripUnet is evaluated on the public DeepGlobe and Massachusetts datasets, achieving F1 scores of 83.75% and 80.65% and IoUs of 73.04% and 67.96%, respectively. Compared to the latest state-of-the-art models, the F1 scores improve by 1.07% and 1.11%, and the IoUs increase by 1.28% and 1.07%, respectively. Experiments demonstrate that StripUnet is highly effective for dense road network extraction.
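The abstract names the modules but gives no implementation details, so the following is only a minimal sketch of a generic strip-attention block in PyTorch, in the spirit of the SALM described above: it pools features along rows and columns, mixes the resulting strip descriptors with 1-D convolutions, and uses them to re-weight the input feature map so that elongated, road-like structures are emphasised. The class name `StripAttention`, its layer choices, and the tensor sizes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class StripAttention(nn.Module):
    """Illustrative strip-attention block (an assumption, not the paper's exact SALM)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1-D convolutions that mix neighbouring strip descriptors.
        self.conv_v = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool along width and height to get per-row and per-column strip descriptors.
        strip_rows = x.mean(dim=3, keepdim=True)   # (N, C, H, 1)
        strip_cols = x.mean(dim=2, keepdim=True)   # (N, C, 1, W)
        strip_rows = self.conv_v(strip_rows)       # mix information along the vertical axis
        strip_cols = self.conv_h(strip_cols)       # mix information along the horizontal axis
        # Broadcast both descriptors back to (N, C, H, W) and turn them into a gate.
        gate = torch.sigmoid(self.fuse(strip_rows + strip_cols))
        return x * gate                            # emphasise elongated, road-like responses


if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)           # dummy encoder feature map
    print(StripAttention(64)(feats).shape)         # torch.Size([2, 64, 128, 128])
```

Strip pooling of this kind is one common way to bias attention toward long, thin structures; the actual designs of SALM, MSFF, SFEM, and MSSD should be taken from the full paper.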
About the journal:
The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges.
Our focus is on providing critical information to the intelligent vehicle community, serving as a dissemination vehicle for IEEE ITS Society members and others interested in learning about state-of-the-art developments and progress in research and applications related to intelligent vehicles. Join us in advancing knowledge and innovation in this dynamic field.