ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences: Latest Publications

Cross Domain Early Crop Mapping with Label Spaces Discrepancies using MultiCropGAN
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-241-2024
Yiqun Wang, Hui Huang, Radu State
Abstract. Mapping target crops before the harvest season in regions lacking crop-specific ground truth is critical for global food security. Using multispectral remote sensing and domain adaptation, prior studies strive to produce precise crop maps in such regions (the target domain) with the help of crop-specific labelled remote sensing data from source regions (the source domain). However, existing approaches assume identical label spaces across domains, an assumption often unmet in reality, necessitating a more adaptable solution. This paper introduces the Multiple Crop Mapping Generative Adversarial Neural Network (MultiCropGAN) model, comprising a generator, a discriminator, and a classifier. The generator transforms target-domain data into the source domain, employing identity losses to retain the characteristics of the target data. The discriminator aims to distinguish the two domains and shares its structure and weights with the classifier, which locates crops in the target domain using the generator's output. The model's novel capability lies in locating target crops within the target domain despite differences in crop-type label spaces between the target and source domains. In experiments, MultiCropGAN is benchmarked against various baseline methods; when label spaces differ, it significantly outperforms them, improving overall accuracy by about 10%.
Citations: 0
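The abstract above mentions identity losses without giving their form; a common choice, taken here as an assumption rather than the paper's definition, is an L1 penalty that discourages the generator from altering samples whose characteristics should be preserved. A minimal NumPy sketch:

```python
import numpy as np

def identity_loss(generator, x):
    # L1 identity loss: penalize the generator for altering x.
    # The L1 form is an assumption; the paper only states that
    # identity losses retain the characteristics of the target data.
    return float(np.mean(np.abs(generator(x) - x)))
```

An identity mapping incurs zero loss, and any systematic change to the input raises it proportionally.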
Cross-modal change detection flood extraction based on self-supervised contrastive pre-training
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-75-2024
Wenqing Feng, Fangli Guan, Chenhao Sun, Wei Xu
Abstract. Flood extraction is a critical task in remote sensing analysis, facing challenges such as complex scenes, image differences across modalities, and a shortage of labeled samples. Traditional supervised deep learning algorithms show promise for flood extraction but mostly rely on abundant labeled data; in practice, labeled samples for flood change regions are scarce and expensive to acquire, while unlabeled remote sensing imagery is plentiful. Self-supervised contrastive learning (SSCL) offers a solution, allowing learning from unlabeled data without explicit labels. Inspired by SSCL, we utilized the open-source CAU-Flood dataset and developed a framework for cross-modal change detection in flood extraction (CMCDFE). We employed the Barlow Twins (BT) SSCL algorithm to learn effective visual feature representations of flood change regions from unlabeled cross-modal bi-temporal remote sensing data, then transferred these well-initialized weight parameters to the flood extraction task, achieving optimal accuracy. We introduced an improved CS-DeepLabV3+ network, incorporating the CBAM dual attention mechanism, for extracting flood change regions from cross-modal bi-temporal remote sensing data. Experiments on the CAU-Flood dataset show that fine-tuning with only a pre-trained encoder can surpass widely used ImageNet pre-training methods without additional data, effectively addressing downstream cross-modal change detection flood extraction tasks.
Citations: 0
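The Barlow Twins objective used above drives the cross-correlation matrix of two embedding views toward the identity: diagonal terms toward 1 (invariance between views) and off-diagonal terms toward 0 (redundancy reduction). A minimal NumPy sketch of the loss, where the weighting factor `lam` is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    # Standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    n = z_a.shape[0]
    c = z_a.T @ z_b / n  # d x d cross-correlation matrix
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)            # invariance term
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)  # redundancy term
    return float(on_diag + lam * off_diag)
```

Identical views yield diagonal correlations of exactly 1, so only residual cross-dimension correlations contribute; anti-correlated views are penalized heavily by the invariance term.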
Facial Recognition and Classification of Terracotta Warriors in the Mausoleum of the First Emperor Using Deep Learning
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-205-2024
Yan Sheng
Abstract. The facial features of the Terracotta Warriors unearthed from the Mausoleum of the First Emperor of Qin are authentic depictions of the appearance of soldiers of that period, and classifying the warriors by their facial features is a crucial aspect of archaeological research. Because the collection of facial samples from the Terracotta Warriors is limited, an enhanced SqueezeNet model is proposed for deep-learning facial recognition. The FaceNet backbone feature extraction network is improved by replacing the initial 7×7 convolution kernel with three 3×3 convolution kernels. The model's feature extraction layer consists of alternating convolution layers, pooling layers, Fire modules, and pooling layers, with an exponential function introduced to smooth the shape of the loss function. Finally, facial classification of 295 Terracotta Warriors is accomplished using agglomerative clustering. The model achieves a facial recognition accuracy of 95.6%, an improvement of 4.1% and 2.8% over the classical SqueezeNet and Inception_ResNetV1 models, respectively. This approach better meets the requirements for facial recognition and classification of the Terracotta Warriors, providing intelligent and efficient technical support for technological archaeology.
Citations: 0
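The 7×7 to three-3×3 substitution described above preserves the receptive field while cutting weights and inserting extra nonlinearities between layers. A small sketch verifying the arithmetic (the channel widths are illustrative assumptions):

```python
def receptive_field(kernels):
    # Receptive field of a stack of stride-1 convolutions:
    # each k x k layer extends the field by (k - 1).
    rf = 1
    for k in kernels:
        rf += k - 1
    return rf

def conv_weights(c_in, c_out, kernels):
    # Weight count (ignoring biases), keeping width c_out between layers.
    total, cin = 0, c_in
    for k in kernels:
        total += cin * c_out * k * k
        cin = c_out
    return total

# Three 3x3 layers see the same 7x7 window with roughly 45% fewer weights.
assert receptive_field([7]) == receptive_field([3, 3, 3]) == 7
assert conv_weights(64, 64, [3, 3, 3]) < conv_weights(64, 64, [7])
```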
Surface Deformation Monitoring and Subsidence Mechanism Analysis in Beijing based on Time-series InSAR
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-233-2024
Jinghui Wang, Ziyan Luo, Lv Zhou, Xinyi Li, Cheng Wang, Dongming Qin
Abstract. This article uses 43 Sentinel-1A image datasets covering Beijing to obtain time-series surface deformation information for the study area from January 2022 to October 2023, and analyzes the causes of land subsidence by integrating precipitation and urban construction. The main results are as follows. (1) Three distinct subsidence areas are identified: the eastern part of Chaoyang District in Beijing (Subsidence Area A), the northwestern part of Tongzhou District in Beijing (Subsidence Area B), and Yanjiao Town in Hebei Province, adjacent to Beijing (Subsidence Area C). Subsidence Areas A and B exhibit a dispersed pattern, with maximum land subsidence rates exceeding −30 mm/year; Subsidence Area C displays funnel-shaped subsidence, with most areas experiencing rates exceeding −25 mm/year and a maximum rate of −45 mm/year. (2) Precipitation has a significant impact on surface deformation in the study area, with a strong correlation between precipitation and land subsidence in the three areas (correlation values of 0.77, 0.77, and 0.74, respectively). (3) Urban construction also affects land subsidence, though the degree of impact varies by region.
Citations: 0
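The correlation values reported in finding (2) are presumably Pearson coefficients between the precipitation series and the subsidence time series; the abstract does not name the estimator, so that is an assumption. A minimal sketch of the computation:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation between two equally sampled time series
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))
```

A value near ±1 indicates a strong linear relationship; the reported 0.74 to 0.77 range corresponds to a clearly positive but imperfect coupling.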
Building Height Extraction Based on Satellite GF-7 High-Resolution Stereo Image
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-219-2024
Zijin Tian, Yan Gong
Abstract. High-resolution remote sensing images can distinguish objects of smaller size, expressing the texture features and structural information of objects more clearly, and provide a data source for large-scale mapping, high-precision stereo measurement, and other fields. The purpose of this paper is to estimate building heights by analyzing the stereoscopic observations formed by the front- and rear-view images of the Gaofen-7 linear-array CCD. After the roof profile of a building is delineated on the rear-view image, a series of candidate object elevations is assumed, i.e., elevations are searched with a fixed step within a given search range, following the object-space image matching VLL (vertical line locus) algorithm. Through the RFM imaging model of the Gaofen-7 sensor, the rear-view contour is projected onto the front-view image at each candidate elevation; PSNR is selected as the similarity measure of the window, and the similarity of the image patches corresponding to the front- and rear-view contours is calculated. The candidate elevation with the highest similarity is taken as the estimated height of the building. With this technical route, building heights on high-resolution images can be estimated to within 3 meters of accuracy.
Citations: 0
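The elevation search described above reduces to a loop: project the rear-view roof contour to the front-view image at each candidate elevation, score the two patches with PSNR, and keep the best-scoring elevation. A hedged NumPy sketch in which `project_and_crop` is a stand-in for the real RFM-based projection:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio between two equally sized image patches
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def estimate_height(z_min, z_max, step, rear_patch, project_and_crop):
    # project_and_crop(z) -> front-view patch for the contour projected
    # at candidate elevation z (stand-in for the RFM projection step)
    zs = np.arange(z_min, z_max + step, step)
    scores = [psnr(rear_patch, project_and_crop(z)) for z in zs]
    return float(zs[int(np.argmax(scores))])
```

The returned elevation is the one whose projected patch most resembles the rear-view patch, which is the VLL selection rule the abstract describes.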
InSAR Digital Surface Model Refinement by Block Adjustment with Horizontal Constraints
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-259-2024
Lai Wei, Tao Ke, Quan Jing, Fanhong Li, Pengjie Tao
Abstract. Interferometric synthetic aperture radar (InSAR) is an important technology for generating digital surface models (DSMs). Past studies on spaceborne-derived DSMs mostly focused on elevation correction, owing to the relatively low resolution of DSM products. As large volumes of high-resolution satellite data emerge, horizontal discrepancies also need to be considered. This paper proposes a DSM block adjustment method with horizontal constraints, aimed at eliminating the horizontal errors between multiple overlapping DSM scenes and achieving high precision and consistency in both the horizontal and vertical dimensions. Using ICESat-2 ATL08 point clouds as absolute elevation control and a reference DSM for horizontal control, the adjustment equations are constructed from tie-point and control constraints. The experiment uses 7 image pairs from the Chinese TH2-01 SAR satellite, the corresponding ICESat-2 ATL08 points, and AW3D30 as the reference DEM. The block adjustment results show that the proposed method improves the absolute vertical accuracy from 3.78 m to 2.56 m and reduces the average horizontal standard deviation between the InSAR-derived DSMs and the reference AW3D30 from 15.31 m to 9.08 m.
Citations: 0
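A block adjustment built from tie-point and control constraints can be illustrated at toy scale with per-scene vertical biases only (the actual method also carries horizontal unknowns; this simplification is an assumption). Each tie point relates the biases of two overlapping scenes, each control point anchors one scene absolutely, and the stacked equations are solved by least squares:

```python
import numpy as np

def adjust_vertical_biases(n_scenes, ties, controls):
    # ties: (i, j, dz)  observed tie-point height in scene i minus
    #        its height in scene j        ->  b_i - b_j = dz
    # controls: (i, dz) scene-i height minus absolute control height
    #                                     ->  b_i = dz
    rows, rhs = [], []
    for i, j, dz in ties:
        r = np.zeros(n_scenes); r[i], r[j] = 1.0, -1.0
        rows.append(r); rhs.append(dz)
    for i, dz in controls:
        r = np.zeros(n_scenes); r[i] = 1.0
        rows.append(r); rhs.append(dz)
    b, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return b  # subtract b[i] from scene i to align all scenes
```

With consistent observations the biases are recovered exactly; with noisy ones the least-squares solution distributes the residuals across the block.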
Intricate multiple scattering features of artificial facilities in X-Band SAR images
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-153-2024
Sijie Ma, Tao Li, Yan Liu, Jie Liu
Abstract. Owing to the intricate distortion and reflection geometry of the SAR signal, it is typically difficult to determine the multiple scattering of large artificial objects in SAR images. This work presents a scattering point path tracking model that utilizes the real three-dimensional dimensions of targets, based on the geometric optics method. Three different artificial structures (light poles, cable-stayed bridges, and power transmission lines) are carefully analysed in time-series SAR images against their simulated multiple scattering results. The results demonstrate that the paths determined by the model are consistent with the multiple scattering features in the SAR images. Moreover, the time-series data show that ripples on the water surface have a significant impact on the multiple scattering features of power lines and bridges. The double scattering features of the light poles provide a novel approach to processing permanent scatterers (PS) in urban areas. The instances presented in this study demonstrate the effectiveness of the scattering point path tracking model in identifying various artificial facility targets on different reflective surfaces; it will be a useful tool for deciphering the multiple scattering of large artificial structures when their 3D models are known.
Citations: 0
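Geometric-optics path tracking rests on specular reflection: at each bounce, the ray direction is mirrored about the surface normal. A minimal sketch of that single step (the full model, which traces paths through a 3D target model, is well beyond this fragment):

```python
import numpy as np

def reflect(d, n):
    # Specular reflection of ray direction d off a surface with normal n
    d = np.asarray(d, float)
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * (d @ n) * n

# A ray hitting level water straight down bounces straight back up.
assert np.allclose(reflect([0, 0, -1], [0, 0, 1]), [0, 0, 1])
```

Chaining this step over a facility's 3D surfaces yields the double- and triple-bounce paths whose extra path length places the scattering features in the SAR image.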
Accuracy Assessment of UAV Photogrammetry System with RTK Measurements for Direct Georeferencing
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-169-2024
Zhuangqun Niu, Hui Xia, Pengjie Tao, Tao Ke
Abstract. The direct georeferencing accuracy of unmanned aerial vehicle (UAV) images with real-time kinematic (RTK) measurements is a topic of concern in the photogrammetry community. This study assesses the capabilities of a multi-rotor platform equipped with RTK technology, specifically a DJI Phantom 4 RTK UAV, for robust direct georeferencing. The UAV surveyed a square and a building at Wuhan University to assess the accuracy and spatial consistency of direct georeferencing in close-range photography, and checkpoint errors were tested under various ground control point (GCP) configurations. The results show that without GCPs, an analysis of 71 spatially distributed checkpoints produced a root mean square error (RMSE) of 5.58 cm in the Z direction. This finding indicates that RTK-equipped UAVs can achieve acceptable error margins even without GCPs, thereby fulfilling the precision requirements for large-scale mapping.
Citations: 0
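The checkpoint assessment above reduces to per-axis RMSE over the checkpoint set. A minimal sketch (the (n, 3) array layout is an assumption for illustration):

```python
import numpy as np

def checkpoint_rmse(measured, reference):
    # Per-axis RMSE (X, Y, Z) over n checkpoints; inputs shaped (n, 3)
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    return np.sqrt(np.mean(diff**2, axis=0))
```

Applied to the 71 checkpoints, the third component of this vector is the 5.58 cm Z-direction figure the abstract reports.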
Efficient Feature Matching for Large-scale Images based on Cascade Hash and Local Geometric Constraint
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-289-2024
Kan You, San Jiang, Yaxin Li, Wanshou Jiang, Xiangxiang Huang
Abstract. Feature matching plays a crucial role in 3D reconstruction by providing correspondences between overlapping images, and its accuracy and efficiency significantly impact reconstruction performance. The widely used framework of exhaustive nearest neighbor searching (NNS) between descriptors followed by RANSAC-based geometric estimation is, however, inefficient and unreliable for large-scale UAV images. Inspired by indexing-based NNS, this paper implements an efficient feature matching method for large-scale images based on Cascade Hashing and local geometric constraints. The proposed method improves upon traditional feature matching approaches by combining image retrieval, data scheduling, and GPU-accelerated Cascade Hashing, and it applies a local geometric constraint to filter matching results within the matching framework. On the one hand, GPU-accelerated Cascade Hashing generates compact, discriminative hash codes from image features, completing the initial matching rapidly and significantly reducing the search space and time complexity. On the other hand, after initial matching, the local geometric constraint filters the initial matches, enhancing their accuracy. Together these form a three-tier framework based on data scheduling, GPU-accelerated Cascade Hashing, and local geometric constraints. We conducted experiments on two sets of large-scale UAV image data, comparing our method with SIFTGPU in initial matching, outlier rejection, and 3D reconstruction. The results demonstrate a feature matching speed 2.0 times that of SIFTGPU while maintaining matching accuracy and producing comparable reconstruction results, suggesting that the method holds promise for efficient large-scale image matching.
Citations: 0
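The first tier of a cascade hashing matcher replaces exhaustive descriptor comparison with cheap Hamming-distance lookups over short binary codes. A toy sketch using random hyperplane hashing (a generic stand-in, not the paper's exact construction):

```python
import numpy as np

def binary_codes(desc, proj):
    # Sign of random projections gives an m-bit code per descriptor
    return desc @ proj > 0.0

def hamming_match(codes_a, codes_b):
    # Index in B of the nearest code, by Hamming distance, per code in A
    d = (codes_a[:, None, :] != codes_b[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)
```

In the real cascade, this coarse pass only shortlists candidates; a finer hash tier and a Euclidean check on the surviving descriptors refine the result.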
RANSAC-Based Planar Point Cloud Segmentation Enhanced by Normal Vector and Maximum Principal Curvature Clustering
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences Pub Date : 2024-05-09 DOI: 10.5194/isprs-annals-x-1-2024-145-2024
Yibo Ling, Yuli Wang, Ting On Chan
Abstract. Planar feature segmentation is an essential task in 3D point cloud processing, with many applications in fields such as robotics and computer vision. Random sample consensus (RANSAC) is one of the most common segmentation algorithms, but its performance in its original form is usually limited by the use of a single threshold and by interference between similar planar features located close to each other. To address these issues, we present a novel point cloud processing workflow that adds an initial segmentation stage before the basic RANSAC is performed. First, normal vectors and maximum principal curvatures are computed and integrated for each point of a given point cloud. Next, a subset of the normal vectors and curvatures is used to cluster planes with similar geometry via the region growing algorithm, serving as a coarse but fast segmentation. The segmentation is then refined with the RANSAC algorithm, which can be performed with higher accuracy and speed owing to the reduced interference. After the RANSAC process, the resulting planar point clouds are built up from the sparse ones via a point aggregation process based on geometric constraints. Four datasets (three real and one simulated) were used to verify the method. Compared to the classic segmentation method, ours achieves higher accuracy, with a fitting RMSE of 0.0521 m, a recall of 93.31%, and an F1-score of 95.38%.
Citations: 0
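The basic RANSAC stage that the workflow refines can be sketched as: repeatedly fit a plane to three random points and keep the plane with the most inliers within a distance threshold. A minimal NumPy sketch (the threshold and iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = np.empty(0, dtype=int), None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = np.flatnonzero(dist < tol)
        if inliers.size > best_inliers.size:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

Running this on the whole cloud with a single `tol` is exactly the limitation the abstract points out: nearby, similarly oriented planes compete for inliers, which is why the paper clusters by normals and curvature first.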