Building extraction from oblique photogrammetry point clouds based on PointNet++ with attention mechanism

Hong Hu, Qing Tan, Ruihong Kang, Yanlan Wu, Hui Liu, Baoguo Wang
{"title":"Building extraction from oblique photogrammetry point clouds based on PointNet++ with attention mechanism","authors":"Hong Hu, Qing Tan, Ruihong Kang, Yanlan Wu, Hui Liu, Baoguo Wang","doi":"10.1111/phor.12476","DOIUrl":null,"url":null,"abstract":"Unmanned aircraft vehicles (UAVs) capture oblique point clouds in outdoor scenes that contain considerable building information. Building features extracted from images are affected by the viewing point, illumination, occlusion, noise and image conditions, which make building features difficult to extract. Currently, ground elevation changes can provide powerful aids for the extraction, and point cloud data can precisely reflect this information. Thus, oblique photogrammetry point clouds have significant research implications. Traditional building extraction methods involve the filtering and sorting of raw data to separate buildings, which cause the point clouds to lose spatial information and reduce the building extraction accuracy. Therefore, we develop an intelligent building extraction method based on deep learning that incorporates an attention mechanism module into the Samling and PointNet operations within the set abstraction layer of the PointNet++ network. To assess the efficacy of our approach, we train and extract buildings from a dataset created using UAV oblique point clouds from five regions in the city of Bengbu, China. Impressive performance metrics are achieved, including 95.7% intersection over union, 96.5% accuracy, 96.5% precision, 98.7% recall and 97.8% F1 score. And with the addition of attention mechanism, the overall training accuracy of the model is improved by about 3%. 
This method showcases potential for advancing the accuracy and efficiency of digital urbanization construction projects.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Photogrammetric Record","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/phor.12476","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Unmanned aerial vehicles (UAVs) capture oblique point clouds of outdoor scenes that contain considerable building information. Building features extracted from images are affected by viewpoint, illumination, occlusion, noise and imaging conditions, which makes them difficult to extract reliably. Ground elevation changes provide a powerful cue for extraction, and point cloud data reflect this information precisely, so oblique photogrammetry point clouds have significant research value. Traditional building extraction methods filter and sort the raw data to separate buildings, which causes the point clouds to lose spatial information and reduces extraction accuracy. We therefore develop a deep-learning-based building extraction method that incorporates an attention mechanism module into the Sampling and PointNet operations within the set abstraction layer of the PointNet++ network. To assess the efficacy of our approach, we train the network on, and extract buildings from, a dataset built from UAV oblique point clouds of five regions in the city of Bengbu, China. The method achieves strong performance: 95.7% intersection over union, 96.5% accuracy, 96.5% precision, 98.7% recall and a 97.8% F1 score. With the addition of the attention mechanism, the model's overall training accuracy improves by about 3%. This method shows potential for advancing the accuracy and efficiency of digital urbanization construction projects.
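The abstract describes inserting an attention module into the Sampling and PointNet operations of a PointNet++ set abstraction layer. The paper's learned attention module and network weights are not given here; the following is a minimal NumPy sketch of the set abstraction pipeline (farthest point sampling, ball-query grouping, per-group pooling) with a hypothetical squeeze-and-excitation-style channel gate standing in for the attention mechanism. All function names and the parameter-free sigmoid gate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def farthest_point_sample(xyz, n_samples):
    """Iteratively pick the point farthest from all previously chosen points."""
    n = xyz.shape[0]
    idx = np.zeros(n_samples, dtype=int)
    dist = np.full(n, np.inf)
    current = 0
    for i in range(n_samples):
        idx[i] = current
        d = np.sum((xyz - xyz[current]) ** 2, axis=1)
        dist = np.minimum(dist, d)          # distance to nearest chosen point
        current = int(np.argmax(dist))      # next sample: farthest point
    return idx

def ball_query(xyz, centers, radius, k):
    """For each center, indices of up to k neighbours within the radius."""
    groups = []
    for c in centers:
        d = np.linalg.norm(xyz - c, axis=1)
        nbrs = np.where(d <= radius)[0][:k]
        if nbrs.size == 0:                  # guarantee at least one neighbour
            nbrs = np.array([int(np.argmin(d))])
        groups.append(nbrs)
    return groups

def channel_attention(feats):
    """Hypothetical attention stand-in: sigmoid channel gate on the group mean."""
    squeeze = feats.mean(axis=0)            # (c,) per-channel summary
    gate = 1.0 / (1.0 + np.exp(-squeeze))   # sigmoid excitation (no learned weights)
    return feats * gate                     # reweight channels before pooling

def set_abstraction(xyz, feats, n_samples, radius, k):
    """One attention-augmented set abstraction layer: sample, group, gate, pool."""
    centers = xyz[farthest_point_sample(xyz, n_samples)]
    out = []
    for c, g in zip(centers, ball_query(xyz, centers, radius, k)):
        # relative coordinates concatenated with point features, as in PointNet++
        local = np.concatenate([xyz[g] - c, feats[g]], axis=1)
        local = channel_attention(local)
        out.append(local.max(axis=0))       # max-pool the group (PointNet operation)
    return centers, np.stack(out)
```

In the published network the gate would be a small learned layer trained end-to-end; this sketch only shows where such a module slots in, between grouping and the pooling step.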

