Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy
Norbert Haala, Michael Kölle, Michael Cramer, Dominik Laupheimer, Florian Zimmermann
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 4, Article 100014, April 2022. DOI: 10.1016/j.ophoto.2022.100014

Abstract: During the last two decades, UAVs have emerged as a standard platform for photogrammetric data collection. The main motivation in that early phase was cost-effective airborne image collection over areas of limited size, which was already feasible with rather simple payloads such as an off-the-shelf compact camera and a navigation-grade GNSS sensor. Meanwhile, dedicated sensor systems enable applications that were not feasible in the past. One example, discussed in this paper, is the airborne collection of dense 3D point clouds at millimetre accuracy. For this purpose, we collect both LiDAR and image data from a joint UAV platform and apply so-called hybrid georeferencing, a process that integrates photogrammetric bundle block adjustment with direct georeferencing of LiDAR point clouds. By these means, the georeferencing accuracy of the LiDAR point cloud is improved by an order of magnitude. We demonstrate the feasibility of our approach in a project that aims at monitoring subsidence of about 10 mm/year. The area of interest comprises a ship lock and its mixed-use vicinity, where multiple UAV flights were captured and evaluated over a period of three years. As our main contribution, we demonstrate that 3D point accuracies at sub-centimetre level can be achieved. This is realized by joint orientation of laser scans and images in a hybrid adjustment framework, which enables accuracies corresponding to the GSD of the captured imagery.
Detection of anomalous vehicle trajectories using federated learning
Christian Koetsier, Jelena Fiosina, Jan N. Gremmel, Jörg P. Müller, David M. Woisetschläger, Monika Sester
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 4, Article 100013, April 2022. DOI: 10.1016/j.ophoto.2022.100013

Abstract: Nowadays, mobile positioning devices such as global navigation satellite system (GNSS) receivers, but also external sensor technology like cameras, allow efficient online collection of trajectories that reflect the behaviour of moving objects such as cars. The data can be used for various applications, e.g., traffic planning or map updating, which need many trajectories to extract and infer the desired information, especially when machine or deep learning approaches are used. Often, the amount and diversity of the necessary data exceed what can be collected by individuals or even single companies. Currently, data owners, e.g., vehicle producers or service operators, are reluctant to share data due to data privacy rules or because of the risk of sharing information with competitors, which could jeopardize the data owner's competitive advantage. A promising approach to exploit data from several data owners without directly accessing it is federated learning, which allows collaborative learning by exchanging only model parameters rather than raw data.

In this paper, we address the problem of anomaly detection in vehicle trajectories and investigate the benefits of federated learning. To this end, we apply several state-of-the-art learning algorithms, such as the one-class support vector machine (OCSVM) and isolation forest, thus solving a one-class classification problem. Based on these learning mechanisms, we propose and verify a federated architecture for the collaborative identification of anomalous trajectories at several intersections. We demonstrate that the federated approach is beneficial not only for improving overall anomaly detection accuracy, but also for each individual data owner. The experiments show that federated learning increases the anomaly detection accuracy from an average AUC-ROC score of 97% for individual intersections to up to 99% through cooperation.
Pavement distress detection using terrestrial laser scanning point clouds – Accuracy evaluation and algorithm comparison
Ziyi Feng, Aimad El Issaoui, Matti Lehtomäki, Matias Ingman, Harri Kaartinen, Antero Kukko, Joona Savela, Hannu Hyyppä, Juha Hyyppä
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 3, Article 100010, January 2022. DOI: 10.1016/j.ophoto.2021.100010

Abstract: In this paper, we compare five crack detection algorithms using terrestrial laser scanner (TLS) point clouds. The methods are based on common point cloud processing techniques operating on along- and across-track profiles, surface fitting, or local pointwise features, with or without machine learning. Crack area and volume were calculated from the crack points detected by the algorithms, and the completeness, correctness, and F1 score of each algorithm were computed against manually collected references. Ten 1 m by 3.5 m plots containing 75 distresses of six types (depression, disintegration, pothole, longitudinal, transverse, and alligator cracks) were selected from a 3-km-long road to capture the variability of distresses. For crack detection at plot level, the best algorithm achieved a completeness of up to 0.844, a correctness of up to 0.853, and an F1 score of up to 0.849; its overall (ten plots combined) completeness, correctness, and F1 score were 0.642, 0.735, and 0.685, respectively. For crack area estimation, the overall mean absolute percentage errors (MAPE) of the two best algorithms were 19.8% and 20.3%; for crack volume estimation, the two best algorithms yielded 19.3% and 14.5% MAPE. When the plots were grouped by crack detection complexity, the best algorithm in the 'easy' category reached a MAPE of 8.9% for crack area estimation and 0.7% for crack volume estimation.
Semantic segmentation of point cloud data using raw laser scanner measurements and deep neural networks
Risto Kaijaluoto, Antero Kukko, Aimad El Issaoui, Juha Hyyppä, Harri Kaartinen
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 3, Article 100011, January 2022. DOI: 10.1016/j.ophoto.2021.100011

Abstract: Deep learning methods based on convolutional neural networks (CNNs) have been shown to give excellent results in semantic segmentation of images, but the inherent irregularity of point cloud data complicates their use for semantically segmenting 3D laser scanning data. To overcome this problem, point cloud networks specialized for the purpose have been developed since 2017, but finding the most appropriate way to semantically segment point clouds is still an open research question. In this study, we perform semantic segmentation of point cloud data with convolutional neural networks using only the raw measurements provided by a profiling laser scanner capable of multiple echo detection. We format the measurements into a series of 2D rasters, where each raster contains the measurements (range, reflectance, echo deviation) of a single scanner mirror rotation, in order to exploit the rich body of research on semantic segmentation of 2D images with CNNs. A similar approach for profiling laser scanners in a forest context has not been proposed before. A boreal forest in the Evo region near Hämeenlinna, Finland, served as the experimental study area. The data were collected with the FGI Akhka-R3 backpack laser scanning system, georeferenced, and then manually labelled into ground, understorey, tree trunk, and foliage classes for training and evaluation. The labelled points were transformed back to 2D rasters and used for training three different neural network architectures. The same georeferenced data in point cloud format were also used to train the state-of-the-art point cloud semantic segmentation network RandLA-Net, and the results were compared with those of our method. Our best semantic segmentation network reached a mean Intersection-over-Union of 80.1%, comparable to the 80.6% reached by the point-cloud-based RandLA-Net. The numerical results and visual analysis of the resulting point clouds show that our method is a valid way of performing semantic segmentation of point clouds, at least in the forest context. The labelled datasets have also been released to the research community.
Deep learning approach for Sentinel-1 surface water mapping leveraging Google Earth Engine
Timothy Mayer, Ate Poortinga, Biplov Bhandari, Andrea P. Nicolau, Kel Markert, Nyein Soe Thwal, Amanda Markert, Arjen Haag, John Kilbride, Farrukh Chishtie, Amit Wadhwa, Nicholas Clinton, David Saah
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 2, Article 100005, December 2021. DOI: 10.1016/j.ophoto.2021.100005

Abstract: Satellite remote sensing plays an important role in mapping the location and extent of surface water. A variety of approaches are available for surface water mapping, but deep learning approaches are not yet commonplace, as they are 'data hungry' and require large amounts of computational resources. However, with the availability of various satellite sensors and rapid developments in cloud computing, the remote sensing community is adopting modern deep learning approaches. The integration of the cloud-based Google AI Platform and Google Earth Engine enables users to deploy computations at scale. In this paper, we investigate two methods of automatic data labelling: (1) the Joint Research Centre (JRC) surface water maps and (2) an Edge-Otsu dynamic thresholding approach. We deployed a U-Net convolutional neural network to map surface water from Sentinel-1 Synthetic Aperture Radar (SAR) data and tested the model performance using different hyperparameter tuning combinations to identify the optimal learning rate and loss function. The performance was then evaluated using an independent validation dataset. We tested 12 models overall and found that the models using the JRC data labels showed better performance, with F1-scores ranging from 0.972 to 0.986 for the training, testing, and validation efforts. Additionally, an independently sampled high-resolution dataset was used to further evaluate model performance; in this independent validation effort, models leveraging JRC data labels produced F1-scores ranging from 0.913 to 0.922. A pairwise comparison of models, varying input data, learning rates, and loss function constituents, revealed the JRC Adjusted Binary Cross-Entropy Dice model to be statistically different from the 66 other model combinations, and it displayed the highest relative evaluation metrics, including accuracy, precision, Cohen's kappa coefficient, and F1-score. These results are in the same range as many conventional methods. We observed that the integration of Google AI Platform into Google Earth Engine can be a powerful tool for deploying deep learning algorithms at scale, and that automatic data labelling can be an effective strategy in the development of deep learning models; however, independent data validation remains an important step in model evaluation.
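Our reading of an Edge-Otsu style dynamic threshold, sketched with scikit-image on toy data: detect strong edges, sample the backscatter in a buffer around them, and take the Otsu threshold of that bimodal sample instead of the whole scene. The sigma, buffer width, and the synthetic image are assumptions, not the paper's parameters.

```python
import numpy as np
from skimage import feature, filters, morphology

def edge_otsu_threshold(backscatter_db, edge_sigma=2.0, buffer_px=10):
    # Strong edges mark likely land/water boundaries.
    edges = feature.canny(backscatter_db, sigma=edge_sigma)
    # Sample only a buffer around the edges: a roughly bimodal population.
    band = morphology.binary_dilation(edges, morphology.disk(buffer_px))
    return filters.threshold_otsu(backscatter_db[band])

rng = np.random.default_rng(2)
img = np.where(np.arange(200)[None, :] < 100, -18.0, -8.0)  # water | land, in dB
img = img + rng.normal(0.0, 0.5, (200, 200))
t = edge_otsu_threshold(img)
water = img < t
print(f"threshold {t:.1f} dB, water fraction {water.mean():.2f}")
```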
Results of the ISPRS benchmark on indoor modelling
Kourosh Khoshelham, Ha Tran, Debaditya Acharya, Lucia Díaz Vilariño, Zhizhong Kang, Sagi Dalyot
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 2, Article 100008, December 2021. DOI: 10.1016/j.ophoto.2021.100008

Abstract: This paper reports the results of the ISPRS benchmark on indoor modelling. Reconstructed models submitted by 11 participating teams are evaluated on a dataset comprising six point clouds representing indoor environments of different complexity. The evaluation is based on measuring the completeness, correctness, and accuracy of the reconstructed wall elements through comparison with manually generated reference models. The results show that the performance of the methods varies across datasets, but the reconstruction methods generally achieve better results for point clouds with higher accuracy and density and fewer gaps, as well as for point clouds representing less complex environments. Filtering clutter points in a pre-processing step contributes to higher correctness, and making strong assumptions about the shape of the reconstructed elements contributes to higher completeness and accuracy for models of Manhattan-World environments.
Monitoring aseismic fault creep using persistent urban geodetic markers generated from mobile laser scanning
Xinxiang (Sean) Zhu, Craig L. Glennie, Benjamin A. Brooks, Todd L. Ericksen
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 2, Article 100009, December 2021. DOI: 10.1016/j.ophoto.2021.100009

Abstract: High-resolution, high-accuracy distributed detection of fault creep deformation remains challenging given limited observations and the associated change detection strategies. A mobile laser scanning-based change detection method capable of measuring centimetre-level near-field (<150 m from the fault) deformation is described. The methodology leverages man-made features in the built environment as geodetic markers that can be tracked over time. The proposed framework consists of a RANSAC-based corresponding-plane detector and a combined least squares displacement estimator. Using repeat mobile laser scanning data collected in 2015 and 2017 on a 2 km segment of the Hayward fault, near-field fault creep displacement and non-linear creep deformation are estimated. The detection results reveal 2.5 ± 1.5 cm of accumulated fault-parallel creep displacement in the far field. The laser scanning estimates of displacement match collocated alinement array observations at the 4 mm level in the near field. The proposed change detection framework is shown to be accurate and practical for fault creep displacement detection in the near field, and the detected non-linear creep displacement patterns will help elucidate the complex physics of surface faulting.
Efficient coarse registration method using translation- and rotation-invariant local descriptors towards fully automated forest inventory
Eric Hyyppä, Jesse Muhojoki, Xiaowei Yu, Antero Kukko, Harri Kaartinen, Juha Hyyppä
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 2, Article 100007, December 2021. DOI: 10.1016/j.ophoto.2021.100007

Abstract: In this paper, we present a simple, efficient, and robust algorithm for 2D coarse registration of two point clouds. In the proposed algorithm, the locations of distinct objects are detected in the point cloud data, and a rotation- and translation-invariant feature descriptor vector is computed for each detected object from the relative locations of its neighbouring objects. The feature descriptors obtained for the different point clouds are then compared against one another using the Euclidean distance in feature space as the similarity criterion. Using the nearest-neighbour distance ratio, the most promising matching object pairs are found and used to fit the optimal Euclidean transformation between the two point clouds. Importantly, the time complexity of the proposed algorithm scales quadratically with the number of objects detected in the point clouds. We demonstrate the algorithm in the context of forest inventory by performing coarse registration between terrestrial and airborne point clouds, using trees as the objects and no information other than the locations of the detected trees. We evaluate the performance of the algorithm using both simulations and three test sites located in a boreal forest, and show that the algorithm is fast and performs well for a large range of stem densities and for test sites with up to 10 000 trees. Additionally, we show that the algorithm works reliably even in the case of moderate errors in the tree locations, commission and omission errors in tree detection, and partial overlap of the datasets. We also demonstrate that additional tree attributes can be incorporated into the proposed feature descriptor to improve the robustness of the registration, provided that reliable information on these attributes is available. Furthermore, we show that the registration accuracy between the terrestrial and airborne point clouds can be significantly improved if stem positions estimated from the terrestrial data are matched to stem positions obtained from the airborne data instead of tree-top positions estimated from the airborne data. Even though the 2D coarse registration algorithm is demonstrated in the context of forestry, it is not restricted to forest data and may be utilized in other applications where efficient 2D point-set registration is needed.
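A simplified sketch of the descriptor-and-match pipeline: each detected tree is described by the sorted distances to its k nearest neighbours (which are invariant to rotation and translation), candidate matches are screened with the nearest-neighbour distance ratio, and a 2D rigid transform is fitted to the surviving pairs. The value of k, the ratio, and the synthetic stem map are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def descriptors(pts, k=6):
    dists, _ = cKDTree(pts).query(pts, k=k + 1)  # column 0 is the point itself
    return dists[:, 1:]

def match(desc_a, desc_b, ratio=0.8):
    d, idx = cKDTree(desc_b).query(desc_a, k=2)
    keep = d[:, 0] < ratio * d[:, 1]             # nearest-neighbour distance ratio
    return np.flatnonzero(keep), idx[keep, 0]

def rigid_2d(src, dst):
    """Least-squares rotation + translation (Kabsch, no scale)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, mu_d - R @ mu_s

rng = np.random.default_rng(4)
trees_a = rng.uniform(0, 100, (200, 2))          # stem positions, terrestrial scan
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
trees_b = trees_a @ R_true.T + np.array([12.0, -7.0])  # same stems, airborne frame
ia, ib = match(descriptors(trees_a), descriptors(trees_b))
R, t = rigid_2d(trees_a[ia], trees_b[ib])
print(np.allclose(R, R_true), t.round(2))        # True [ 12.  -7.]
```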
Pose-aware monocular localization of occluded pedestrians in 3D scene space
Mohammad Masoud Rahimi, Kourosh Khoshelham, Mark Stevenson, Stephan Winter
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 2, Article 100006, December 2021. DOI: 10.1016/j.ophoto.2021.100006

Abstract: Localization of pedestrians in 3D scene space from single RGB images is critical for various downstream applications. Current monocular approaches employ either the bounding box of a pedestrian or the visible parts of their body for localization. Both approaches introduce additional error to the location estimate in real-world scenarios, i.e., crowded environments with multiple occluded pedestrians. To overcome this limitation, this paper proposes a novel human pose-aware pedestrian localization framework that models the poses of occluded pedestrians, enabling accurate localization in both image and ground space. This is achieved with a light-weight neural network architecture that ensures fast and accurate prediction of missing body parts for downstream applications. Comprehensive experiments on two real-world datasets demonstrate the effectiveness of the framework compared to the state of the art in both predicting pedestrians' missing body parts and pedestrian localization.
The Hessigheim 3D (H3D) benchmark on semantic segmentation of high-resolution 3D point clouds and textured meshes from UAV LiDAR and Multi-View-Stereo
Michael Kölle, Dominik Laupheimer, Stefan Schmohl, Norbert Haala, Franz Rottensteiner, Jan Dirk Wegner, Hugo Ledoux
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 1, Article 100001, October 2021. DOI: 10.1016/j.ophoto.2021.100001

Abstract: Automated semantic segmentation and object detection are of great importance in geospatial data analysis. However, supervised machine learning systems such as convolutional neural networks require large corpora of annotated training data, and especially in the geospatial domain such datasets are quite scarce. Within this paper, we aim to alleviate this issue by introducing a new annotated 3D dataset that is unique in three ways: (i) the dataset consists of both an Unmanned Aerial Vehicle (UAV) laser scanning point cloud and a 3D textured mesh; (ii) the point cloud features a mean point density of about 800 pts/m², and the oblique imagery used for texturing the 3D mesh realizes a ground sampling distance of about 2–3 cm, which enables the identification of fine-grained structures and represents the state of the art in UAV-based mapping; (iii) both data modalities will be published for a total of three epochs, allowing applications such as change detection. The dataset depicts the village of Hessigheim (Germany) and is henceforth referred to as H3D, represented either as the 3D point cloud H3D(PC) or the 3D mesh H3D(Mesh). It is designed to promote research in the field of 3D data analysis on the one hand, and to evaluate and rank existing and emerging approaches for semantic segmentation of both data modalities on the other. Ultimately, we hope that H3D will become a widely used benchmark dataset alongside the well-established ISPRS Vaihingen 3D Semantic Labeling Challenge benchmark (V3D). The dataset can be downloaded from https://ifpwww.ifp.uni-stuttgart.de/benchmark/hessigheim/default.aspx.