StripUnet: A Method for Dense Road Extraction From Remote Sensing Images

IF 14.0 · CAS Region 1 (Engineering) · JCR Q1 (Computer Science, Artificial Intelligence)
Xianzhi Ma;Xiaokai Zhang;Daoxiang Zhou;Zehua Chen
DOI: 10.1109/TIV.2024.3393508
Journal: IEEE Transactions on Intelligent Vehicles, vol. 9, no. 11, pp. 7097-7109
Published: 2024-04-25 (Journal Article)
Open access: no
Citations: 0

Abstract

Road extraction from high-resolution remote sensing images can provide vital data support for applications in urban and rural planning, traffic control, and environmental protection. However, roads in many remote sensing images are densely distributed with a very small proportion of road information against a complex background, significantly impacting the integrity and connectivity of the extracted road network structure. To address this issue, we propose a method named StripUnet for dense road extraction from remote sensing images. The designed Strip Attention Learning Module (SALM) enables the model to focus on strip-shaped roads; the designed Multi-Scale Feature Fusion Module (MSFF) is used for extracting global and contextual information from deep feature maps; the designed Strip Feature Enhancement Module (SFEM) enhances the strip features in feature maps transmitted through skip connections; and the designed Multi-Scale Snake Decoder (MSSD) utilizes dynamic snake convolution to aid the model in better reconstructing roads. The designed model is tested on the public datasets DeepGlobe and Massachusetts, achieving F1 scores of 83.75% and 80.65%, and IoUs of 73.04% and 67.96%, respectively. Compared to the latest state-of-the-art models, F1 scores improve by 1.07% and 1.11%, and IoUs increase by 1.28% and 1.07%, respectively. Experiments demonstrate that StripUnet is highly effective in dense road network extraction.
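The abstract does not describe the internals of the Strip Attention Learning Module, so the sketch below is only a generic illustration of the strip-shaped prior such a module exploits, not the paper's exact design: each channel of a feature map is pooled into a horizontal and a vertical strip, and the broadcast sum of the two strips gates the map, emphasizing long, thin structures such as roads.

```python
import numpy as np

def strip_attention(x):
    """Gate a feature map with horizontal and vertical strip pooling.

    x: feature map of shape (C, H, W).
    Returns a reweighted map of the same shape.
    """
    # Pool each channel into a 1 x W horizontal strip and an H x 1 vertical strip.
    h_strip = x.mean(axis=1, keepdims=True)   # (C, 1, W)
    v_strip = x.mean(axis=2, keepdims=True)   # (C, H, 1)
    # Broadcasting the strips back to (C, H, W) highlights rows and columns
    # whose long-range average response is high -- a strip-shaped prior.
    gate = 1.0 / (1.0 + np.exp(-(h_strip + v_strip)))  # sigmoid, values in (0, 1)
    return x * gate

feat = np.random.default_rng(0).standard_normal((8, 64, 64))
out = strip_attention(feat)
print(out.shape)
```

In a real network the pooled strips would pass through learned convolutions before gating; the pure-NumPy version above only shows the pooling-and-broadcast pattern.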
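The reported F1 and IoU pairs are mutually consistent. For a single pooled confusion matrix, F1 = 2TP/(2TP+FP+FN) and IoU = TP/(TP+FP+FN), so IoU = F1/(2 - F1); the small residual gap (about one point on DeepGlobe) is expected when scores are averaged per image rather than computed from one pooled matrix. A quick check against the abstract's numbers:

```python
def iou_from_f1(f1):
    """For one confusion matrix, F1 = 2TP/(2TP+FP+FN) and
    IoU = TP/(TP+FP+FN), which gives IoU = F1 / (2 - F1)."""
    return f1 / (2.0 - f1)

# Scores reported in the abstract: (dataset, F1, IoU).
for name, f1, iou in [("DeepGlobe", 0.8375, 0.7304),
                      ("Massachusetts", 0.8065, 0.6796)]:
    # Predicted IoU from F1 lands within about one point of the reported IoU.
    print(name, round(iou_from_f1(f1), 4), iou)
```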
Source journal
IEEE Transactions on Intelligent Vehicles
Category: Mathematics - Control and Optimization
CiteScore: 12.10
Self-citation rate: 13.40%
Articles per year: 177
Journal description: The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges. Our focus is on providing critical information to the intelligent vehicle community, serving as a dissemination vehicle for IEEE ITS Society members and others interested in learning about the state-of-the-art developments and progress in research and applications related to intelligent vehicles.