Title: Automatic labelling for semantic segmentation of VHR satellite images: Application of airborne laser scanner data and object-based image analysis
Authors: Kirsi Karila, Leena Matikainen, Mika Karjalainen, Eetu Puttonen, Yuwei Chen, Juha Hyyppä
DOI: 10.1016/j.ophoto.2023.100046
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100046, August 2023

Abstract: The application of deep learning methods to remote sensing data has produced good results in recent studies. A promising application area is automatic land cover classification (semantic segmentation) from very high-resolution (VHR) satellite imagery. However, deep learning methods require large, labelled training datasets that are suitable for the study area. Map data can be used as training data, but it is often insufficiently detailed for very high-resolution satellite imagery. National airborne laser scanner (lidar) datasets provide additional detail and are available in many countries. Successful land cover classifications from lidar datasets have been achieved, e.g., by object-based image analysis. In the present study, we investigated the feasibility of using airborne laser scanner data and object-based image analysis to automatically generate labelled training data for a deep neural network-based land cover classification of a VHR satellite image. Input data for the object-based classification included digital surface models, intensity and pulse information derived from the lidar data. The resulting land cover classification was then utilized as training data for deep learning. A state-of-the-art deep learning architecture, UnetFormer, was trained and applied to the land cover classification of a WorldView-3 stereo dataset. For the semantic segmentation, three different input data composites were produced using the red, green, blue, NIR and digital surface model bands derived from the satellite data. The quality of the generated training data and the semantic segmentation results was estimated using an independent test set of ground truth points. The results show that a final satellite image classification accuracy (94–96%) close to the training data accuracy (97%) was obtained. It was also demonstrated that the resulting maps could be used for land cover change detection.

Title: Individual tree segmentation and species classification using high-density close-range multispectral laser scanning data
Authors: Aada Hakula, Lassi Ruoppa, Matti Lehtomäki, Xiaowei Yu, Antero Kukko, Harri Kaartinen, Josef Taher, Leena Matikainen, Eric Hyyppä, Ville Luoma, Markus Holopainen, Ville Kankare, Juha Hyyppä
DOI: 10.1016/j.ophoto.2023.100039
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100039, August 2023

Abstract: Tree species characterise, for example, the biodiversity, health, economic potential, and resilience of an ecosystem. Tree species classification based on remote sensing data, however, is known to be a challenging task. In this paper, we study for the first time the feasibility of tree species classification using high-density point clouds collected with an airborne close-range multispectral laser scanning system, a technique that has previously proved capable of providing stem curve and volume accurately and rapidly for standing trees. To this end, we carried out laser scanning measurements from a helicopter on 53 forest sample plots, each with a size of 32 m × 32 m. The plots covered approximately 5500 trees in total, including Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) H.Karst.), and deciduous trees such as downy birch (Betula pubescens Ehrh.) and silver birch (Betula pendula Roth). The multispectral laser scanning system consisted of integrated Riegl VUX-1HA, miniVUX-3UAV, and VQ-840-G scanners (Riegl GmbH, Austria) operating at wavelengths of 1550 nm, 905 nm, and 532 nm, respectively. A new approach, layer-by-layer segmentation, was developed for individual tree detection and segmentation from the dense point cloud data. After individual tree segmentation, 249 features were computed for tree species classification, which was tested with approximately 3000 trees. The features described the point cloud geometry as well as single-channel and multi-channel reflectance metrics. Both feature selection and tree species classification were conducted using the random forest method. Using the layer-by-layer segmentation algorithm, trees in the dominant and co-dominant categories were found with detection rates of 89.5% and 77.9%, respectively, whereas suppressed trees were detected with a success rate of 15.2%–42.3%, clearly improving upon standard watershed segmentation. The overall accuracy of the tree species classification was 73.1% when using geometric features from the 1550 nm scanner data and 86.6% when combining the geometric features with reflectance information of the 1550 nm data. The use of multispectral reflectance and geometric features improved the overall classification accuracy up to 90.8%. Classification accuracies were as high as 92.7% and 93.7% for dominant and co-dominant trees, respectively.

{"title":"Automated pipeline reconstruction using deep learning & instance segmentation","authors":"Lukas Hart , Stefan Knoblach , Michael Möser","doi":"10.1016/j.ophoto.2023.100043","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100043","url":null,"abstract":"<div><p>BIM is a powerful tool for the construction industry as well as for various other industries, so that its use has increased massively in recent years. Laser scanners are usually used for the measurement, which, in addition to the high acquisition costs, also cause problems on reflective surfaces. The use of photogrammetric techniques for BIM in industrial plants, on the other hand, is less widespread and less automated. CAD software (for point cloud evaluation) contains at best automated reconstruction algorithms for pipes. Fittings, flanges or elbows require a manual reconstruction. We present a method for automated processing of photogrammetric images for modeling pipelines in industrial plants. For this purpose we use instance segmentation and reconstruct the components of the pipeline directly based on the edges of the segmented objects in the images. Hardware costs can be kept low by using photogrammetry instead of laser scanning. Besides the autmatic extraction and reconstruction of pipes, we have also implemented this for elbows and flanges. For object recognition, we fine-tuned different instance segmentation models using our own training data, while also testing various data augmentation techniques. The average precision varies depending on the object type. The best results were achieved with Mask R–CNN. Here, the average precision was about 40%. The results of the automated reconstruction were examined with regard to the accuracy on a test object in the laboratory. The deviations from the reference geometry were in the range of a few millimeters and were comparable to manual reconstruction. In addition, further tests were carried out with images from a plant. Provided that the objects were correctly and completely recognized, a satisfactory reconstruction is possible with the help of our method.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"9 ","pages":"Article 100043"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49726180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Deep unsupervised learning for 3D ALS point clouds change detection
Authors: Iris de Gélis, Sudipan Saha, Muhammad Shahzad, Thomas Corpetti, Sébastien Lefèvre, Xiao Xiang Zhu
DOI: 10.1016/j.ophoto.2023.100044
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100044, August 2023

Abstract: Change detection from traditional 2D optical images has limited capability to model changes in the height or shape of objects. Change detection using 3D point clouds from photogrammetry or LiDAR surveying can fill this gap by providing critical depth information. While most existing machine learning-based 3D point cloud change detection methods are supervised, they depend heavily on the availability of annotated training data, which is a critical point in practice. To circumvent this dependence, we propose an unsupervised 3D point cloud change detection method based mainly on self-supervised learning using deep clustering and contrastive learning. The proposed method also relies on an adaptation of deep change vector analysis to 3D point clouds via nearest point comparison. Experiments conducted on an aerial LiDAR survey dataset show that the proposed method achieves higher performance than traditional unsupervised methods, with a gain of about 9% in mean accuracy (reaching more than 85%). It thus appears to be a relevant choice in scenarios where prior knowledge (labels) is not ensured. The code will be made available at https://github.com/IdeGelis/torch-points3d-SSL-DCVA.

Title: Instance segmentation of individual tree crowns with YOLOv5: A comparison of approaches using the ForInstance benchmark LiDAR dataset
Authors: Adrian Straker, Stefano Puliti, Johannes Breidenbach, Christoph Kleinn, Grant Pearse, Rasmus Astrup, Paul Magdon
DOI: 10.1016/j.ophoto.2023.100045
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100045, August 2023

Abstract: Fine-grained information at the level of individual trees constitutes a key component of forest observation, enabling forest management practices that tackle the effects of climate change and the loss of biodiversity in forest ecosystems. Such information on individual tree crowns (ITCs) can be derived from ITC segmentation approaches that utilize remotely sensed data. However, many ITC segmentation approaches require prior knowledge about forest characteristics, which is difficult to obtain for parameterization. This can be avoided by adopting data-driven, automated workflows based on convolutional neural networks (CNNs). To contribute to the advancement of efficient ITC segmentation approaches, we present a novel ITC segmentation approach based on the YOLOv5 CNN. We analyzed the performance of this approach on a comprehensive international unmanned aerial laser scanning (UAV-LS) dataset (ForInstance), which covers a wide range of forest types. The ForInstance dataset consists of 4192 individually annotated trees in high-density point clouds with point densities ranging from 498 to 9529 points m⁻², collected across 80 sites. The dataset was split into 70% for training and validation and 30% for model performance assessment (test data). For the best-performing model, we observed an F1-score of 0.74 for ITC segmentation and a tree detection rate (DET%) of 64% on the test data. This model outperformed an ITC segmentation approach that requires prior knowledge about forest characteristics by 41% and 33% in F1-score and DET%, respectively. Furthermore, we tested the effects of reduced point densities (498, 50 and 10 points m⁻²) on ITC segmentation performance. The YOLO model exhibited promising F1-scores of 0.69 and 0.62 even at point densities of 50 and 10 points m⁻², respectively, between 27% and 34% better than the ITC approach that requires prior knowledge. Furthermore, the areas of ITC segments resulting from the application of the best-performing YOLO model were close to the reference areas (RMSE = 3.19 m²), suggesting that YOLO-derived ITC segments can be used to derive information at the ITC level.

Title: A step towards inter-operable Unmanned Aerial Vehicle (UAV) based phenotyping: A case study demonstrating a rapid, quantitative approach to standardize image acquisition and check quality of acquired images
Authors: Gattu Priyanka, Sunita Choudhary, Krithika Anbazhagan, Dharavath Naresh, Rekha Baddam, Jan Jarolimek, Yogesh Parnandi, P. Rajalakshmi, Jana Kholova
DOI: 10.1016/j.ophoto.2023.100042
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100042, August 2023

Abstract: Unmanned aerial vehicle (UAV)-based imaging is being intensively explored for precise crop evaluation. Various optical sensors, such as RGB, multispectral, and hyperspectral cameras, can be used for this purpose. Consistent image quality is crucial for accurate plant trait prediction (i.e., phenotyping). However, achieving consistent image quality can be a challenge, as image quality is affected by i) the technical settings of the UAV and camera, ii) the environment, and iii) crop and field characteristics, which are not always under the direct control of the UAV operator. Capturing the images therefore requires robust protocols to acquire images of suitable quality, and there is a lack of systematic studies on this topic in the public domain. In this case study, we present an approach (protocols, tools, and analytics) that addresses this gap in our specific context. In our case, a drone (DJI Inspire 1 Raw) equipped with an RGB camera (DJI Zenmuse X5) needed to be standardized for phenotyping of the annual crops' canopy cover (CC). To achieve this, we conducted 69 flights in Hyderabad, India, on 5 different cereal and legume crops (~300 genotypes) at different vegetative growth stages, with different combinations of technical setups of the UAV and camera, and across environmental conditions typical for that region. For each crop-genotype combination, the ground truth for CC was rapidly estimated using an automated phenomics platform (LeasyScan phenomics platform, ICRISAT). This dataset enabled us to 1) quantify the sensitivity of image acquisition to the main technical, environmental and crop-related factors; this analysis was then used to develop image acquisition protocols specific to our UAV-camera system, a process significantly eased by automated ground-truth collection. We also 2) identified the important image quality indicators that integrate the effects of 1); these indicators were used to develop quality control protocols for inspecting the images post-acquisition. To ease 2), we present a web-based application, available at https://github.com/GattuPriyanka/Framework-for-UAV-image-quality.git, which automatically calculates these key image quality indicators. Overall, we present a methodology for establishing an image acquisition protocol and quality checks for the obtained images, enabling high accuracy of plant trait inference. The methodology was demonstrated on a particular UAV-camera setup and focused on a specific crop trait (CC) at the ICRISAT research station (Hyderabad, India). We envision that, in the future, a similar image quality control system could facilitate the interoperability of data from various UAV-imaging setups.

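A minimal sketch of simple per-image quality indicators of the kind the abstract describes. The authors' exact indicators live in their linked web application; the metrics below (Laplacian-variance sharpness, mean brightness, contrast, saturated-pixel fraction) are common stand-ins, not the paper's definitions:

```python
import cv2

def quality_indicators(path):
    """Return a few scalar quality indicators for one aerial image."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return {
        "sharpness": cv2.Laplacian(gray, cv2.CV_64F).var(),  # blur check
        "brightness": float(gray.mean()),                    # exposure check
        "contrast": float(gray.std()),
        "saturated_frac": float((gray >= 250).mean()),       # over-exposure
    }
```

Thresholds on such indicators, calibrated against ground truth, would then flag images to reacquire.
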
Title: Branch information extraction from Norway spruce using handheld laser scanning point clouds in Nordic forests
Authors: Olli Winberg, Jiri Pyörälä, Xiaowei Yu, Harri Kaartinen, Antero Kukko, Markus Holopainen, Johan Holmgren, Matti Lehtomäki, Juha Hyyppä
DOI: 10.1016/j.ophoto.2023.100040
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100040, August 2023

Abstract: We showed that a mobile handheld laser scanner (HHLS) provides useful features concerning the wood-quality-influencing external structures of trees. When linked with wood properties measured at a sawmill using state-of-the-art X-ray scanners, these data enable the training of various wood quality models for use in targeting and planning future wood procurement. A total of 457 Norway spruce (Picea abies (L.) H. Karst.) sample trees from 13 spruce-dominated stands in southeastern Finland were used in the study. All test sites were recorded with a ZEB Horizon HHLS, and the sample trees were tracked to a sawmill and subjected to X-ray scanning. Two branch extraction techniques were applied to the HHLS point clouds: 1) a method developed in this study based on density-based spatial clustering of applications with noise (DBSCAN), and 2) a segmentation-based quantitative structure model (treeQSM). On average, the treeQSM method detected 46% more branches per tree than DBSCAN did. However, compared with the X-rayed references, some of the branches detected by treeQSM may either be false positives or so small that the X-rays cannot detect them as knots, as the method overestimated the whorl count by 19% compared with the X-rays. The DBSCAN method, on the other hand, only detected larger branches and showed a −11% bias in whorl count. Overall, DBSCAN underestimated knot volumes within trees by 6%, while treeQSM overestimated them by 25%. When we input the HHLS features into a random forest model, the knottiness variables measured at the sawmill were predicted with R² values of 0.47–0.64. The results were comparable with previous results derived with static terrestrial laser scanners. The obtained stem branching data are relevant for predicting wood quality attributes but do not provide data directly comparable with the X-ray features. Future work should combine terrestrial point clouds with dense above-canopy point clouds to overcome limitations related to vertical coverage.

Title: Multi-modal image matching to colorize a SLAM based point cloud with arbitrary data from a thermal camera
Authors: Melanie Elias, Alexandra Weitkamp, Anette Eltner
DOI: 10.1016/j.ophoto.2023.100041
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 9, Article 100041, August 2023

Abstract: Thermal mapping of buildings is one approach to assess insulation, which is important for upgrading buildings to increase energy efficiency and for climate change adaptation. Personal laser scanning (PLS) is a fast and flexible option that has become increasingly popular for efficiently mapping building facades. However, some measurement systems do not include sufficient colorization of the point cloud. In order to detect, map and reference any damage to building facades, it is of great interest to transfer images from RGB and thermal infrared (TIR) cameras to the point cloud. This study aims to answer the research question of whether a flexible tool can be developed that enables such measurements with high spatial resolution and flexibility. To this end, an image-to-geometry registration approach for rendered point clouds is combined with a deep learning (DL)-based image feature matcher to estimate the camera pose of arbitrary images in relation to the geometry, i.e. the point cloud, in order to map color information. We developed a research design for multi-modal image matching to investigate the alignment of RGB and TIR camera images to a PLS point cloud with intensity information, using calibrated and un-calibrated images. The accuracies of the estimated pose parameters reveal that the registration performs best for pre-calibrated, i.e. undistorted, RGB camera images. The alignment of un-calibrated RGB and TIR camera images to a point cloud is possible if sufficient and well-distributed 2D-3D feature matches between image and point cloud are available. Our workflow enables the colorization of point clouds with high accuracy using images with very different radiometric characteristics and image resolutions. Only a rough approximation of the camera pose is required, and the approach hence relieves strict sensor synchronization requirements.

{"title":"Point cloud registration for LiDAR and photogrammetric data: A critical synthesis and performance analysis on classic and deep learning algorithms","authors":"Ningli Xu , Rongjun Qin Ph.D. , Shuang Song","doi":"10.1016/j.ophoto.2023.100032","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100032","url":null,"abstract":"<div><p>Three-dimensional (3D) point cloud registration is a fundamental step for many 3D modeling and mapping applications. Existing approaches are highly disparate in the data source, scene complexity, and application, therefore the current practices in various point cloud registration tasks are still ad-hoc processes. Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformation between unregistered point clouds of complex objects and scenes. However, their performances are mostly evaluated using a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of the state-of-the-art (SOTA) point cloud registration methods, where we analyze and evaluate these methods using a diverse set of point cloud data from indoor to satellite sources. The quantitative analysis allows for exploring the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analysis works that introduce point cloud registration as a holistic process, our experimental analysis is based on its inherent two-step process to better comprehend these approaches including feature/keypoint-based initial coarse registration and dense fine registration through cloud-to-cloud (C2C) optimization. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods were tested. We observed that the success rate of most of the algorithms are fewer than 40% over the datasets we tested and there are still are large margin of improvement upon existing algorithms concerning 3D sparse corresopondence search, and the ability to register point clouds with complex geometry and occlusions. With the evaluated statistics on three datasets, we conclude the best-performing methods for each step and provide our recommendations, and outlook future efforts.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100032"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49723908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Precision estimation of 3D objects using an observation distribution model in support of terrestrial laser scanner network design","authors":"D.D. Lichti , T.O. Chan , Kate Pexman","doi":"10.1016/j.ophoto.2023.100035","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100035","url":null,"abstract":"<div><p>First order geometric network design is an important quality assurance process for terrestrial laser scanning of complex built environments for the construction of digital as-built models. A key design task is the determination of a set of instrument locations or viewpoints that provide complete site coverage while meeting quality criteria. Although simplified point precision measures are often used in this regard, precision measures for common geometric objects found in the built environment—planes, cylinders and spheres—are arguably more relevant indicators of as-built model quality. The computation of such measures at the design stage—which is not currently done—requires generation of artificial observations by ray casting, which can be a dissuasive factor for their adoption. This paper presents models for the rigorous computation of geometric object precision without the need for ray casting. Instead, a model for the 2D distribution of angular observations is coupled with candidate viewpoint-object geometry to derive the covariance matrix of parameters. Three-dimensional models are developed and tested for vertical cylinders, spheres and vertical, horizontal and tilted planes. Precision estimates from real experimental data were used as the reference for assessing the accuracy of the predicted precision—specifically the standard deviation—of the parameters of these objects. Results show that the mean accuracy of the model-predicted precision was 4.3% (of the read data value) or better for the planes, regardless of plane orientation. The mean accuracy of the cylinders was up to 6.2%. Larger differences were found for some datasets due to incomplete object coverage with the reference data, not due to the model. Mean precision for the spheres was similar, up to 6.1%, following adoption of a new model for deriving the angular scanning limits. The computational advantage of the proposed method over precision estimates from simulated, high-resolution point clouds is also demonstrated. The CPU time required to estimate precision can be reduced by up to three orders of magnitude. These results demonstrate the utility of the derived models for efficiently determining object precision in 3D network design in support of scanning surveys for reality capture.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100035"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49737030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}