Toward Automated and Comprehensive Walkability Audits with Street View Images: Leveraging Virtual Reality for Enhanced Semantic Segmentation

Impact Factor 10.6 · JCR Q1 (GEOGRAPHY, PHYSICAL) · CAS Tier 1, Earth Science
Keundeok Park , Donghwan Ki , Sugie Lee
DOI: 10.1016/j.isprsjprs.2025.02.015
Journal: ISPRS Journal of Photogrammetry and Remote Sensing, Volume 223, Pages 78-90
Published: 2025-03-13 (Journal Article)
Citations: 0

Abstract

Street view images (SVIs) coupled with computer vision (CV) techniques have become powerful tools in planning and related fields for measuring the built environment. However, this methodology is often difficult to implement because of challenges in capturing a comprehensive set of planning-relevant environmental attributes and in ensuring adequate accuracy. These shortcomings arise primarily from the annotation policies of the existing benchmark datasets used to train CV models, which are not tailored to urban planning needs. For example, CV models trained on these existing datasets can capture only a very limited subset of the environmental features included in walkability audit tools. To address this gap, this study develops a virtual reality (VR) based benchmark dataset specifically tailored for measuring walkability with CV models. Our aim is to demonstrate that combining VR-based data with a real-world dataset (i.e., ADE20K) improves performance in automated walkability audits. Specifically, we investigate whether VR-based data enables CV models to audit a broader range of walkability-related objects (i.e., comprehensiveness) and to assess objects with enhanced accuracy (i.e., accuracy). As a result, the integrated model achieves a pixel accuracy (PA) of 0.964 and an intersection-over-union (IoU) of 0.679, compared to a pixel accuracy of 0.959 and an IoU of 0.605 for the real-only model. Additionally, a model trained solely on virtual data, incorporating classes absent from the original dataset (i.e., bollards), attains a PA of 0.979 and an IoU of 0.676. These findings allow planners to adapt CV and SVI techniques for more planning-relevant purposes, such as accurately and comprehensively measuring walkability.
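The two evaluation metrics reported in the abstract, pixel accuracy (PA) and intersection-over-union (IoU), are standard for semantic segmentation. As a minimal sketch (not the authors' implementation, and assuming predicted and ground-truth label maps are integer class arrays of the same shape), they can be computed as follows:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class label matches the ground truth."""
    return float(np.mean(pred == gt))

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union, averaged over classes that appear
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes (illustrative only)
pred = np.array([[0, 0], [1, 1]])
gt = np.array([[0, 1], [1, 1]])
pa = pixel_accuracy(pred, gt)     # 3 of 4 pixels agree -> 0.75
miou = mean_iou(pred, gt, 2)      # (1/2 + 2/3) / 2
```

How IoU is averaged varies across papers (per-class mean vs. frequency-weighted), so the sketch above may differ from the exact protocol used in the study.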
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing
Category: Engineering & Technology — Imaging Science & Photographic Technology
CiteScore: 21.00
Self-citation rate: 6.30%
Articles per year: 273
Review time: 40 days
About the journal: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.