{"title":"SELECTED QUALITATIVE ASPECTS OF LIDAR POINT CLOUDS: GEOSLAM ZEB-REVO AND FARO FOCUS 3D X130","authors":"A. Warchoł, T. Karaś, M. Antoń","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-205-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-205-2023","url":null,"abstract":"Abstract. This paper presents a comparison of LiDAR point clouds acquired using two different measurement techniques: static TLS (Terrestrial Laser Scanning) performed with a FARO Focus3D X130 laser scanner and a SLAM-based (Simultaneous Localization and Mapping) MLS (Mobile Laser Scanning) unit, namely the GeoSLAM ZEB-REVO. After the two point clouds were brought into a single coordinate system, they were compared with each other in terms of internal accuracy and density. The density aspect was visualized using 2D density rasters and calculated using three methods available in CloudCompare software. Before choosing how to acquire a LiDAR point cloud, one should therefore consider whether a short measurement time (ZEB-REVO) or higher density and measurement accuracy (FARO Focus3D X130) is more important. In BIM/HBIM modeling applications, logic dictates that the TLS solution should be chosen, despite the longer data acquisition and processing time, as it yields a cloud with far better quality parameters that allow objects in the point cloud to be recognized. 
In a situation where the TLS point cloud is 20 times denser, it becomes possible to model objects at the appropriate level of geometric detail.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
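The neighbour-count density estimate mentioned in the abstract above (one of the options CloudCompare exposes) reduces to a few lines of code. This is an illustrative Python sketch, not code from the paper; the toy grid, the radius, and the function name are assumptions:

```python
import math

def surface_density(points, center, radius):
    """Local surface density in the spirit of CloudCompare's
    'surface density' option: the number of neighbours inside a
    sphere of the given radius, divided by the disc area pi*r^2."""
    k = sum(1 for p in points if math.dist(p, center) <= radius)
    return k / (math.pi * radius ** 2)

# toy cloud: a flat 5 m x 5 m grid sampled every 0.1 m
# (nominal density of 100 points per square metre)
cloud = [(x * 0.1, y * 0.1, 0.0) for x in range(51) for y in range(51)]
density = surface_density(cloud, (2.5, 2.5, 0.0), 0.5)
```

The radius controls the smoothing of the estimate: small radii expose scan-line striping, large radii average it away, which is why CloudCompare lets the user choose it.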
{"title":"INTEGRATION OF IPHONE LiDAR WITH QUADCOPTER AND FIXED WING UAV PHOTOGRAMMETRY FOR THE FORESTRY APPLICATIONS","authors":"Y. Yadav, S. K. P. Kushwaha, M. Mokros, J. Chudá, M. Pondelík","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-213-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-213-2023","url":null,"abstract":"Abstract. The recent innovations in remote sensing technologies have given rise to the efficient mapping and monitoring of forests. Developments in sensor implementation have mainly focused on optimizing the payload of the UAV system and have allowed users to acquire data simultaneously with a range of active and passive sensors, such as high-resolution RGB cameras, multispectral cameras, and LiDAR (Light Detection and Ranging). The main objective of this research contribution is to combine Digital Elevation Models (DEMs) from quadcopter Unmanned Aerial Vehicles (UAVs), fixed-wing UAV-based cameras, and iPhone datasets for the forest plots. Datasets from two vegetation seasons, namely leaf-off and leaf-on, were used to combine the Digital Elevation Models from the different data acquisition platforms. This internship research work aims to create and experiment with new methods, techniques, and technologies for the applications of UAV photogrammetry and iPhone LiDAR in forest mapping and inventory management. The CHMs generated in this work help assess the conditions of forests in recreational areas, and solutions like iPhone LiDAR and UAV photogrammetry could be highly efficient and economical for this purpose. The leaf-off and leaf-on datasets were processed in Agisoft Metashape Professional software to generate dense point clouds for the forest plots. The point cloud from the leaf-on dataset was rasterized to generate a DSM, whereas the leaf-off point cloud yielded a DTM of the forest plots after ground filtering with the Cloth Simulation Filter (CSF) plugin. 
The iPhone LiDAR point cloud was also rasterized to a DTM product after pre-processing and noise removal. The Canopy Height Models (CHMs) were generated by subtracting the UAV- and iPhone-LiDAR-based DTMs from the UAV leaf-on DSM. Finally, the accuracy of the CHMs from the UAV datasets and their integration with iPhone LiDAR was assessed using tree heights measured during the forest field visits. The proposed methodology can be used for forest mapping purposes where moderate accuracy is required.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
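The CHM generation step described above is a per-cell subtraction of the DTM from the DSM. A minimal Python sketch under assumed toy raster values (not data from the study), with void cells propagated and negative heights clamped:

```python
def canopy_height_model(dsm, dtm, nodata=-9999.0):
    """CHM = DSM - DTM, cell by cell; cells that are void in either
    raster (e.g. no ground returns under dense canopy) stay nodata,
    and negative heights are clamped to zero."""
    return [
        [nodata if s == nodata or t == nodata else max(s - t, 0.0)
         for s, t in zip(srow, trow)]
        for srow, trow in zip(dsm, dtm)
    ]

# 2 x 2 toy rasters: surface heights vs. filtered terrain heights (m)
dsm = [[312.4, 315.0], [310.1, -9999.0]]
dtm = [[295.2, 295.5], [295.0, 294.8]]
chm = canopy_height_model(dsm, dtm)
```

In practice the leaf-on DSM and leaf-off DTM must first be resampled to a common grid and extent, otherwise the cell-wise subtraction is meaningless.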
{"title":"EVALUATION OF CONSUMER-GRADE AND SURVEY-GRADE UAV-LIDAR","authors":"G. Mandlburger, M. Kölle, F. Pöppl, M. Cramer","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-99-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-99-2023","url":null,"abstract":"Abstract. Driven by developments in the automotive industry, the availability of compact consumer-grade LiDAR (Light Detection and Ranging) sensors has increased significantly in recent years. Some of these sensors are also suitable for UAV-based surveying tasks. This paper first discusses the differences between consumer-grade and survey-grade LiDAR systems. Special attention is paid to the scanning mechanisms used on the one hand, and to different solutions for the transceiver units on the other. Based on the technical data of two specific systems, the consumer-grade DJI Zenmuse L1 sensor and the survey-grade scanner RIEGL VUX-1UAV, the expected effects of the sensor parameters on the 3D point cloud are first discussed theoretically and then verified using an exemplary dataset from Hessigheim (Baden-Württemberg, Germany). The analysis shows the possibilities and limitations of consumer-grade LiDAR. Compared to the low-cost sensor, the high-end scanner exhibits lower range measurement noise (5–10 mm) and better 3D point location accuracy. Furthermore, the higher laser beam quality of high-end devices (beam divergence, beam shape) enables more detailed object detection at the same point density. 
For applications with moderate accuracy requirements of 5–10 cm, however, the considerably less expensive consumer-grade LiDAR systems also come into consideration in the geodetic-cartographic context.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135666805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
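The beam-divergence effect discussed in the abstract above can be quantified with a first-order footprint model: the laser spot on the target grows linearly with range. The divergence figures below are illustrative values, not the actual Zenmuse L1 or VUX-1UAV specifications:

```python
def footprint_diameter(range_m, divergence_mrad, aperture_m=0.0):
    """First-order laser footprint diameter on the target:
    exit aperture plus range times the (full) beam divergence angle
    (small-angle approximation, divergence given in milliradians)."""
    return aperture_m + range_m * divergence_mrad * 1e-3

# a 0.5 mrad beam at 100 m range spreads to a ~5 cm footprint,
# while a 3 mrad beam smears over ~30 cm at the same range
narrow = footprint_diameter(100.0, 0.5)
wide = footprint_diameter(100.0, 3.0)
```

A larger footprint averages the return over more surface, which is why, at identical point density, a high-divergence sensor resolves less object detail.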
{"title":"EVALUATING GEOMETRY OF AN INDOOR SCENARIO WITH OCCLUSIONS BASED ON TOTAL STATION MEASUREMENTS OF WALL ELEMENTS","authors":"J. Schmidt, V. Volland, P. Hübner, D. Iwaszczuk, A. Eichhorn","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-183-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-183-2023","url":null,"abstract":"Abstract. Scan2BIM approaches, i.e. the automated reconstruction of building models from point cloud data, are typically evaluated against the same point clouds that are used as input for the reconstruction process. In doing so, the point clouds are often used as ground truth without considering their own inaccuracies. Thus, in this research, we investigate the manual creation of an accurate ground truth, using a process that takes into account both the measurement accuracy and the modeling accuracy. We therefore created a ground truth for existing laser scan data with a total station, based on the assumption that a total station generally measures points more reliably. In addition, the manual selection and classification of points on the wall surfaces during the measurement enables reliable detection of the walls via plane fitting. This allows for the creation of a more reliable ground truth, in which corners and edges are determined by the intersection of the fitted planes. The ground truth is aligned parallel to the axes of a local coordinate system. From MLS and TLS point clouds of the same building area, walls are manually classified, and corners and edges are determined in a similar way as for the total station data. These TLS and MLS corners are registered to the ground truth using least squares optimisation at the vertices. The transformation thus determined is also applied to the laser scanning point clouds. The resulting errors at the corners and across the whole point cloud are evaluated. 
We conclude that the standard deviation of wall surfaces alone is not sufficient to determine the quality of the reconstructed building model: despite low measurement noise in single wall surfaces, deviations in the reconstructed room model may still arise.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135728586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
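The corner determination by plane intersection described above amounts to solving a small linear system: three fitted wall/floor planes meet in one point. A hypothetical Python sketch (the plane coefficients are made up for illustration, not taken from the paper):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def corner_from_planes(p1, p2, p3):
    """Intersect three fitted planes (nx, ny, nz, d), with n.x = d,
    into a single corner point using Cramer's rule."""
    A = [list(p[:3]) for p in (p1, p2, p3)]
    d = [p[3] for p in (p1, p2, p3)]
    D = det3(A)
    if abs(D) < 1e-12:
        raise ValueError("planes are (near-)parallel, no unique corner")
    corner = []
    for i in range(3):
        # replace column i of A with the right-hand side d
        Ai = [row[:i] + [d[j]] + row[i + 1:] for j, row in enumerate(A)]
        corner.append(det3(Ai) / D)
    return tuple(corner)

# two perpendicular walls and the floor meeting at (2.0, 3.0, 0.0)
corner = corner_from_planes((1, 0, 0, 2.0), (0, 1, 0, 3.0), (0, 0, 1, 0.0))
```

Because each plane is fitted to many wall points, the intersected corner is far less noisy than any single scanned point near that corner, which is the rationale for the ground-truth construction.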
{"title":"BENCHMARKING THE EXTRACTION OF 3D GEOMETRY FROM UAV IMAGES WITH DEEP LEARNING METHODS","authors":"F. Nex, N. Zhang, F. Remondino, E. M. Farella, R. Qin, C. Zhang","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-123-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-123-2023","url":null,"abstract":"Abstract. 3D reconstruction from single and multi-view stereo images is still an open research topic, despite the high number of solutions proposed in the last decades. The surge of deep learning has stimulated the development of new methods for monocular (MDE, Monocular Depth Estimation), stereoscopic and Multi-View Stereo (MVS) 3D reconstruction, showing promising results, often comparable to or even better than those of traditional methods. The more recent development of NeRF (Neural Radiance Fields) has further triggered interest in this kind of solution. Most of the proposed approaches, however, focus on terrestrial applications (e.g., autonomous driving or 3D reconstruction of small artefacts), while airborne and UAV acquisitions are often overlooked. The recent introduction of new datasets, such as UseGeo, has therefore given the opportunity to assess how state-of-the-art MDE, MVS and NeRF 3D reconstruction algorithms perform using airborne UAV images, allowing their comparison with LiDAR ground truth. This paper presents the results achieved by two MDE, two MVS and two NeRF approaches leveraging deep learning, trained and tested using the UseGeo dataset. 
This comparison against ground truth shows the current state of the art of these solutions and provides useful indications for their future development and improvement.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"INVESTIGATION ON THE USE OF NeRF FOR HERITAGE 3D DENSE RECONSTRUCTION FOR INTERIOR SPACES","authors":"A. Murtiyoso, J. Markiewicz, A. K. Karwel, P. Kot","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-115-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-115-2023","url":null,"abstract":"Abstract. The concept of Neural Radiance Fields (NeRF) emerged in recent years as a method to create novel synthetic 3D viewpoints from a set of training images. While it has several overlaps with conventional photogrammetry and especially multi-view stereo (MVS), its main point of interest is the capability to rapidly recreate objects in 3D. In this paper, we investigate the quality of point clouds generated by a state-of-the-art NeRF method in the context of interior spaces and compare them to four conventional MVS algorithms, of which two are commercial (Agisoft Metashape and Pix4D) and two open source (Patch-Match and Semi-Global Matching). Three synthetic datasets of interior scenes were created from laser scanning data with different characteristics and architectural elements. Results show that NeRF point clouds can achieve geometrically satisfactory results, with an average standard deviation of 1.7 cm in interior cases where the scene is roughly 25–50 m³ in volume. However, the proportion of points with noise considered out of tolerance ranges between 17% and 42%, meaning that the level of detail and finesse is most likely insufficient for sophisticated heritage documentation purposes, even though the results were better from a visualisation point of view. 
However, NeRF did show the capability to reconstruct textureless and reflective surfaces where MVS failed.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135728594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
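The out-of-tolerance noise percentages quoted above follow from simple statistics over cloud-to-cloud distances against the laser scanning reference. An illustrative Python sketch with assumed distances and tolerance (not the paper's actual values):

```python
import math

def noise_stats(c2c_distances, tolerance):
    """Standard deviation of cloud-to-cloud distances, plus the
    share of points beyond the tolerance (counted here as noise)."""
    n = len(c2c_distances)
    mean = sum(c2c_distances) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in c2c_distances) / n)
    out = sum(d > tolerance for d in c2c_distances) / n
    return std, out

# distances in metres from a reconstructed cloud to a TLS reference,
# with a hypothetical 5 cm tolerance for what counts as usable geometry
std, noisy_share = noise_stats([0.010, 0.020, 0.015, 0.080], 0.05)
```

Reporting both numbers matters: a low average standard deviation can coexist with a large fraction of out-of-tolerance points, which is exactly the pattern the abstract describes for NeRF.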
{"title":"A 3D INDOOR-OUTDOOR BENCHMARK DATASET FOR LoD3 BUILDING POINT CLOUD SEMANTIC SEGMENTATION","authors":"Y. Cao, M. Scaioni","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-31-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-31-2023","url":null,"abstract":"Abstract. Deep learning (DL) algorithms require high quality training samples as well as accurate and thorough annotations to work effectively. Until now, only a limited number of datasets have been available for training DL techniques for semantic segmentation of 3D building point clouds, apart from a few focusing on specific categories of constructions (e.g., cultural heritage buildings). This paper presents a new 3D Indoor/Outdoor building dataset (BIO dataset), which aims to provide a highly accurate, detailed, and comprehensive dataset for applications related to semantic classification of buildings based on point clouds and meshes. This benchmark dataset contains 100 building models generated from existing polygonal models and belonging to different categories. These include commercial buildings, residential houses, industrial and institutional buildings. Structural elements of buildings are annotated into 11 semantic categories, following standards from IFC and CityGML. 
To verify the applicability of the BIO dataset for the semantic segmentation task, it was successfully tested using one machine learning technique and four different DL algorithms.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FROM 3D SURVEYING DATA TO BIM TO BEM: THE INCUBE DATASET","authors":"O. Roman, E. M. Farella, S. Rigon, F. Remondino, S. Ricciuti, D. Viesi","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-175-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-175-2023","url":null,"abstract":"Abstract. In recent years, the improvement of sensors and methodologies for 3D reality-based surveying has exponentially enhanced the possibility of creating digital replicas of the real world. LiDAR technologies and photogrammetry are currently standard approaches for collecting 3D geometric information of indoor and outdoor environments at different scales. This information can potentially be part of a broader processing workflow that, starting from 3D surveyed data and through Building Information Model (BIM) generation, leads to more complex analyses of buildings’ features and behavior (Figure 1). However, creating BIM models, especially of historic and heritage assets (HBIM), is still resource-intensive and time-consuming due to the manual efforts required for data creation and enrichment. Improving 3D data processing, interoperability, and the automation of the BIM generation process are trending research topics, and benchmark datasets are extremely helpful in evaluating newly developed algorithms and methodologies for these purposes. This paper introduces the InCUBE dataset, resulting from the activities of the recently funded EU InCUBE project, focused on unlocking EU building renovation through integrated strategies and processes for efficient built-environment management (including the use of innovative renewable energy technologies and digitalization). The dataset collects raw and processed data produced for the Italian demo site in the Santa Chiara district of Trento (Italy). 
The diversity of the shared data enables multiple possible uses, investigations and developments, some of which are presented in this contribution.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EVALUATING A NADIR AND AN OBLIQUE CAMERA FOR 3D INFRASTRUCTURE (CITY) MODEL GENERATION","authors":"K. G. Nikolakopoulos, A. Kyriou","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-131-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-131-2023","url":null,"abstract":"Abstract. The analysis of Earth’s surface is strongly associated with the creation of three-dimensional representations. In light of this, researchers in many realms of research, such as geological, hydrological and ecological planning, city modelling, civil infrastructure monitoring, disaster management and emergency response, require 3D information of high fidelity and accuracy. For many decades, aerial photos or satellite data and photogrammetry provided the necessary information. In recent years, high-resolution imagery acquired by Unmanned Aerial Vehicles (UAV) has become a cost-efficient and quite accurate solution. In this framework, an infrastructure-monitoring project named “PROION” focuses, among other objectives, on the generation of very fine and highly accurate 3D infrastructure (city) models. The present study evaluates a high-resolution nadir camera and an oblique camera for the creation of a 3D representation of the Patras University Campus. During the project, two identical flights over a part of the campus were conducted. The flights were performed with a vertical take-off and landing (VTOL) fixed-wing UAV equipped with an on-board PPK receiver. Based on the conducted flights, many datasets have been evaluated regarding accuracy and fidelity. It was shown that both nadir and oblique cameras produced very accurate 3D representations of the University campus buildings. 
The RMSE of the nadir imagery is almost twice that of the oblique imagery, reaching 30 cm.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
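The RMSE comparison above rests on the standard root-mean-square error over check points. A minimal Python sketch with hypothetical check-point heights (the values are not from the study):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error over paired check points,
    e.g. model heights vs. field-surveyed heights."""
    pairs = list(zip(predicted, observed))
    return math.sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))

# hypothetical check-point heights in metres: 3D model vs. survey
model = [10.1, 11.8, 11.6]
survey = [10.0, 12.0, 11.5]
err = rmse(model, survey)
```

RMSE penalises large residuals quadratically, so a few badly reconstructed check points dominate the figure; that is why it is the usual headline metric for photogrammetric accuracy assessments.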
{"title":"EVALUATING MONOCULAR DEPTH ESTIMATION METHODS","authors":"N. Padkan, P. Trybala, R. Battisti, F. Remondino, C. Bergeret","doi":"10.5194/isprs-archives-xlviii-1-w3-2023-137-2023","DOIUrl":"https://doi.org/10.5194/isprs-archives-xlviii-1-w3-2023-137-2023","url":null,"abstract":"Abstract. Depth estimation from monocular images has become a prominent focus in photogrammetry and computer vision research. Monocular Depth Estimation (MDE), which involves determining depth from a single RGB image, offers numerous advantages, including applications in simultaneous localization and mapping (SLAM), scene comprehension, 3D modeling, robotics, and autonomous driving. Depth information retrieval becomes especially crucial in situations where other sources like stereo images, optical flow, or point clouds are not available. In contrast to traditional stereo or multi-view methods, MDE techniques require fewer computational resources and smaller datasets. This research work presents a comprehensive analysis and evaluation of some state-of-the-art MDE methods, considering their ability to infer depth information in terrestrial images. The evaluation includes quantitative assessments using ground truth data, including 3D analyses and inference time.","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135729538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
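The quantitative assessment of MDE methods mentioned above is commonly reported with metrics such as absolute relative error (AbsRel) and threshold accuracy; the abstract does not list the paper's exact metric set, so the following Python sketch with toy depth values is illustrative only:

```python
def mde_metrics(pred, gt):
    """Two standard MDE evaluation metrics over valid pixels:
    absolute relative error (AbsRel) and the delta < 1.25
    threshold accuracy (share of pixels whose depth ratio
    to ground truth is within a factor of 1.25)."""
    pairs = [(p, g) for p, g in zip(pred, gt) if g > 0 and p > 0]
    abs_rel = sum(abs(p - g) / g for p, g in pairs) / len(pairs)
    delta1 = sum(max(p / g, g / p) < 1.25 for p, g in pairs) / len(pairs)
    return abs_rel, delta1

# predicted vs. ground-truth depths in metres (toy values)
abs_rel, delta1 = mde_metrics([2.0, 4.5, 9.0], [2.0, 4.0, 10.0])
```

Because many MDE networks predict depth only up to an unknown scale, evaluations often align the prediction to the ground truth (e.g. by median scaling) before computing these metrics.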