{"title":"Branch information extraction from Norway spruce using handheld laser scanning point clouds in Nordic forests","authors":"Olli Winberg , Jiri Pyörälä , Xiaowei Yu , Harri Kaartinen , Antero Kukko , Markus Holopainen , Johan Holmgren , Matti Lehtomäki , Juha Hyyppä","doi":"10.1016/j.ophoto.2023.100040","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100040","url":null,"abstract":"<div><p>We showed that a mobile handheld laser scanner (HHLS) provides useful features concerning the wood quality-influencing external structures of trees. When linked with wood properties measured at a sawmill utilizing state-of-the-art X-ray scanners, these data enable the training of various wood quality models for use in targeting and planning future wood procurement. A total of 457 Norway spruce sample trees (<em>Picea abies</em> (L.) H. Karst.) from 13 spruce-dominated stands in southeastern Finland were used in the study. All test sites were recorded with a ZEB Horizon HHLS, and the sample trees were tracked to a sawmill and subjected to X-rays. Two branch extraction techniques were applied to the HHLS point clouds: 1) a method developed in this study that was based on the density-based spatial clustering of applications with noise (DBSCAN) and 2) segmentation-based quantitative structure model (treeQSM). On average, the treeQSM method detected 46% more branches per tree than the DBSCAN did. However, compared with the X-rayed references, some of the branches detected by the treeQSM may either be false positives or so small in size that the X-rays are unable to detect them as knots, as the method overestimated the whorl count by 19% when compared with the X-rays. On the other hand, the DBSCAN method only detected larger branches and showed a −11% bias in whorl count. Overall, the DBSCAN underestimated knot volumes within trees by 6%, while the treeQSM overestimated them by 25%. 
When we input the HHLS features into a Random Forest model, the knottiness variables measured at the sawmill were predicted with R<sup>2</sup>s of 0.47–0.64. The results were comparable with previous results derived with static terrestrial laser scanners. The obtained stem branching data are relevant for predicting wood quality attributes but are not directly comparable with the X-ray features. Future work should combine terrestrial point clouds with dense above-canopy point clouds to overcome the limitations related to vertical coverage.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"9 ","pages":"Article 100040"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49753539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
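The DBSCAN-based branch extraction described in the abstract above can be illustrated with a minimal sketch: after the stem is removed, dense off-stem point clusters are treated as branch candidates and sparse returns as noise. The point geometry and the `eps`/`min_samples` values below are illustrative toy values, not the paper's tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Toy stand-in for stem-removed HHLS points: three tight "branch" clusters
# plus sparse noise (all coordinates and parameters are illustrative).
branches = [rng.normal(c, 0.03, size=(60, 3))
            for c in ([0, 0, 2], [0.5, 0, 3], [0, 0.5, 4])]
noise = rng.uniform(-1, 1, size=(20, 3)) + [0, 0, 3]
pts = np.vstack(branches + [noise])

# eps and min_samples would be tuned to the scanner's point density.
labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(pts)
n_branches = len(set(labels) - {-1})   # label -1 marks noise points
print(n_branches)
```

In practice the cluster count and per-cluster extent would then be converted into whorl and branch-size estimates.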
{"title":"Multi-modal image matching to colorize a SLAM based point cloud with arbitrary data from a thermal camera","authors":"Melanie Elias , Alexandra Weitkamp , Anette Eltner","doi":"10.1016/j.ophoto.2023.100041","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100041","url":null,"abstract":"<div><p>Thermal mapping of buildings is one approach to assessing insulation, which is important for upgrading buildings to increase energy efficiency and for climate change adaptation. Personal laser scanning (PLS) is a fast and flexible option that has become increasingly popular for efficiently mapping building facades. However, some measurement systems do not include sufficient colorization of the point cloud. In order to detect, map and reference any damage to building facades, it is of great interest to transfer images from RGB and thermal infrared (TIR) cameras to the point cloud. This study aims to answer the research question of whether a flexible tool can be developed that enables such measurements with high spatial resolution and flexibility. Therefore, an image-to-geometry registration approach for rendered point clouds is combined with a deep learning (DL)-based image feature matcher to estimate the camera pose of arbitrary images in relation to the geometry, i.e. the point cloud, to map color information. We developed a research design for multi-modal image matching to investigate the alignment of RGB and TIR camera images to a PLS point cloud with intensity information using calibrated and un-calibrated images. The accuracies of the estimated pose parameters show that the registration performs best for pre-calibrated, i.e. undistorted, RGB camera images. The alignment of un-calibrated RGB and TIR camera images to a point cloud is possible if sufficient and well-distributed 2D-3D feature matches between image and point cloud are available.
Our workflow enables the colorization of point clouds with high accuracy using images with very different radiometric characteristics and image resolutions. Only a rough approximation of the camera pose is required, and hence the approach relieves strict sensor synchronization requirements.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"9 ","pages":"Article 100041"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49753541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
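Once the camera pose is estimated, the colorization step in the abstract above reduces to projecting the 3D points through a pinhole model and sampling the image. The intrinsics, pose, and points below are illustrative values, not the paper's calibration.

```python
import numpy as np

# Minimal sketch of point-cloud colorization via pinhole projection,
# assuming the pose (R, t) is already known. All values are toy values.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # fx, fy, cx, cy
R = np.eye(3)                                  # camera aligned with world
t = np.zeros(3)
points = np.array([[0.0, 0.0, 4.0], [0.4, -0.2, 5.0]])       # world coords

cam = (R @ points.T).T + t                     # world -> camera frame
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide -> pixels

image = np.zeros((480, 640, 3), dtype=np.uint8)
image[:, :, 0] = 255                           # dummy thermal/RGB raster
cols = np.round(uv).astype(int)
colors = image[cols[:, 1], cols[:, 0]]         # sample one color per point
print(uv)
```

A real pipeline would additionally test visibility (occlusion) and undistort the image before sampling.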
{"title":"Point cloud registration for LiDAR and photogrammetric data: A critical synthesis and performance analysis on classic and deep learning algorithms","authors":"Ningli Xu , Rongjun Qin Ph.D. , Shuang Song","doi":"10.1016/j.ophoto.2023.100032","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100032","url":null,"abstract":"<div><p>Three-dimensional (3D) point cloud registration is a fundamental step for many 3D modeling and mapping applications. Existing approaches are highly disparate in the data source, scene complexity, and application, therefore the current practices in various point cloud registration tasks are still ad-hoc processes. Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformation between unregistered point clouds of complex objects and scenes. However, their performances are mostly evaluated using a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of the state-of-the-art (SOTA) point cloud registration methods, where we analyze and evaluate these methods using a diverse set of point cloud data from indoor to satellite sources. The quantitative analysis allows for exploring the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analysis works that introduce point cloud registration as a holistic process, our experimental analysis is based on its inherent two-step process to better comprehend these approaches including feature/keypoint-based initial coarse registration and dense fine registration through cloud-to-cloud (C2C) optimization. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods were tested. 
We observed that the success rate of most of the algorithms is below 40% on the datasets we tested, and that there is still a large margin for improvement upon existing algorithms concerning 3D sparse correspondence search and the ability to register point clouds with complex geometry and occlusions. Based on the evaluated statistics on three datasets, we identify the best-performing methods for each step, provide our recommendations, and outline future efforts.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100032"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49723908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
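Both registration steps surveyed above (feature-based coarse alignment and ICP-style fine alignment) share one core computation: given 3D-3D correspondences, recover the least-squares rigid transform. A minimal Kabsch/SVD sketch with synthetic, noise-free correspondences:

```python
import numpy as np

# Least-squares rigid fit from point correspondences (Kabsch/Umeyama).
# Correspondences come from keypoint matching in the coarse step, or from
# nearest neighbors inside each ICP iteration of the fine step.
def rigid_fit(src, dst):
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + [0.5, -1.0, 2.0]        # known transform for testing

R, t = rigid_fit(src, dst)
err = np.abs(dst - (src @ R.T + t)).max()      # ~0 for noise-free matches
print(err)
```

The hard part benchmarked in the paper is producing reliable correspondences in the first place; with outliers, this fit is wrapped in RANSAC or a robust loss.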
{"title":"Precision estimation of 3D objects using an observation distribution model in support of terrestrial laser scanner network design","authors":"D.D. Lichti , T.O. Chan , Kate Pexman","doi":"10.1016/j.ophoto.2023.100035","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100035","url":null,"abstract":"<div><p>First order geometric network design is an important quality assurance process for terrestrial laser scanning of complex built environments for the construction of digital as-built models. A key design task is the determination of a set of instrument locations or viewpoints that provide complete site coverage while meeting quality criteria. Although simplified point precision measures are often used in this regard, precision measures for common geometric objects found in the built environment—planes, cylinders and spheres—are arguably more relevant indicators of as-built model quality. The computation of such measures at the design stage—which is not currently done—requires generation of artificial observations by ray casting, which can be a dissuasive factor for their adoption. This paper presents models for the rigorous computation of geometric object precision without the need for ray casting. Instead, a model for the 2D distribution of angular observations is coupled with candidate viewpoint-object geometry to derive the covariance matrix of parameters. Three-dimensional models are developed and tested for vertical cylinders, spheres and vertical, horizontal and tilted planes. Precision estimates from real experimental data were used as the reference for assessing the accuracy of the predicted precision—specifically the standard deviation—of the parameters of these objects. Results show that the mean accuracy of the model-predicted precision was 4.3% (of the real data value) or better for the planes, regardless of plane orientation. The mean accuracy of the cylinders was up to 6.2%.
Larger differences were found for some datasets due to incomplete object coverage with the reference data, not due to the model. Mean precision for the spheres was similar, up to 6.1%, following adoption of a new model for deriving the angular scanning limits. The computational advantage of the proposed method over precision estimates from simulated, high-resolution point clouds is also demonstrated. The CPU time required to estimate precision can be reduced by up to three orders of magnitude. These results demonstrate the utility of the derived models for efficiently determining object precision in 3D network design in support of scanning surveys for reality capture.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100035"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49737030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
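The covariance propagation underlying such precision models can be sketched in miniature: a scanner observes range and angle, and the Jacobian of the polar-to-Cartesian conversion maps the observation covariance into point covariance. The instrument values below are illustrative, not the paper's sensor model.

```python
import numpy as np

# Propagate range/angle observation uncertainty into 2D point covariance:
# Sigma_xy = J @ Sigma_obs @ J.T, with x = r*cos(theta), y = r*sin(theta).
# Range/angle precisions here are toy values, not a specific scanner's.
r, theta = 10.0, np.deg2rad(30)               # range (m), horizontal angle
sigma_r = 0.005                               # 5 mm range std
sigma_t = np.deg2rad(0.01)                    # 0.01 deg angle std

Sigma_obs = np.diag([sigma_r**2, sigma_t**2])
J = np.array([[np.cos(theta), -r * np.sin(theta)],   # d(x,y)/d(r,theta)
              [np.sin(theta),  r * np.cos(theta)]])
Sigma_xy = J @ Sigma_obs @ J.T
print(np.sqrt(np.diag(Sigma_xy)))             # per-axis standard deviations
```

Stacking such per-observation covariances through a least-squares fit is what yields the object-parameter covariance the paper predicts analytically.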
{"title":"Towards global scale segmentation with OpenStreetMap and remote sensing","authors":"Munazza Usmani , Maurizio Napolitano , Francesca Bovolo","doi":"10.1016/j.ophoto.2023.100031","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100031","url":null,"abstract":"<div><p>Land Use Land Cover (LULC) segmentation is a well-known application of remote sensing in urban environments. Up-to-date and complete data are of major importance in this field. Despite some successes, pixel-based segmentation remains challenging because of class variability. With the increasing popularity of crowd-sourcing projects like OpenStreetMap, the availability of user-generated content has also increased, providing a new prospect for LULC segmentation. We propose a deep-learning approach to segment objects in high-resolution imagery by using semantic crowdsource information. Given the complexity of satellite imagery and crowdsourced databases, deep learning frameworks play a significant role. This integration reduces computation and labor costs. Our methods are based on a fully convolutional neural network (CNN) that has been adapted for multi-source data processing. We discuss the use of data augmentation techniques and improvements to the training pipeline. We applied semantic segmentation (U-Net) and instance segmentation (Mask R-CNN) methods, and Mask R–CNN showed a significantly higher segmentation accuracy from both qualitative and quantitative viewpoints.
The methods reach 91% and 96% overall accuracy in building segmentation and 90% in road segmentation, demonstrating the complementarity of OSM and remote sensing and their potential for city sensing applications.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100031"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49723992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pixel-based mapping of open field and protected agriculture using constrained Sentinel-2 data","authors":"Daniele la Cecilia , Manu Tom , Christian Stamm , Daniel Odermatt","doi":"10.1016/j.ophoto.2023.100033","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100033","url":null,"abstract":"<div><p>Protected agriculture boosts the production of vegetables, berries and fruits, and it plays a pivotal role in guaranteeing food security globally in the face of climate change. Remote sensing is proven to be useful for identifying the presence of (low-tech) plastic greenhouses and plastic mulches. However, the classification accuracy notoriously decreases in the presence of small-scale farming, heterogeneous land cover and unaccounted seasonal management of protected agriculture. Here, we present the random forest-based pixel-level Open field and Protected Agriculture land cover Classifier (OPAC) developed using Sentinel-2 L2A data. OPAC is trained using tiles from Switzerland over 2 years and the Almeria region in Spain over 1 acquisition day. OPAC classifies eight land covers typical of open field and protected agriculture (plastic mulches, low-tech greenhouses and for the first time high-tech greenhouses). Finally, we assess (1) how the land covers in OPAC are labelled in the Sentinel-2 Scene Classification Layer (SCL) and (2) the correspondence between pixels classified as protected agriculture by OPAC and by the best performing Advanced Plastic Greenhouse Index (APGI). To reduce anthropogenic land covers, we constrain the classification task to agricultural areas retrieved from cadastral data or the Corine Land Cover map. The 5-fold cross-validation reveals an overall accuracy of 92% but other classification scores are moderate when keeping the separation among the three classes of protected agriculture. 
However, all scores substantially improve upon grouping the three classes into one (with an Intersection Over Union of 0.58 as an average among the scores of the three classes and of 0.98 for one single class). Given the recently acknowledged importance of Sentinel-2 Band 1 (central wavelength of 443 nm), the classification accuracy of OPAC for the Swiss small-scale farming is mostly limited by the band's reduced spatial resolution (60 m). A careful visual assessment indicates that OPAC achieves satisfactory generalization capabilities also in North European (the Netherlands) and four Mediterranean areas (Spain, Italy, Crete and Turkey) without the need to add location- or time-specific information. There is good agreement between the natural land covers classified by OPAC and the SCL. However, the SCL does not have a class for protected agriculture, the latter being often classified as clouds. APGI achieved similar or lower classification accuracies than OPAC. Importantly, the APGI classification task depends on a user-defined space- and time-specific threshold, whereas OPAC does not. Therefore, OPAC paves the way for rapid mapping of protected agriculture at continental scale.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100033"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49723763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
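The evaluation protocol named in the OPAC abstract above (random forest pixel classifier, 5-fold cross-validation) can be sketched on synthetic "pixel spectra"; the real classifier uses Sentinel-2 L2A bands and eight land-cover classes, which this toy setup does not reproduce.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-pixel band features and land-cover labels
# (12 features, 4 classes -- illustrative only, not Sentinel-2 data).
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")  # 5-fold CV
print(scores.mean())
```

For spatial data, fold splits are often made by region rather than by pixel to avoid optimistic accuracy from spatial autocorrelation.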
{"title":"Towards complete tree crown delineation by instance segmentation with Mask R–CNN and DETR using UAV-based multispectral imagery and lidar data","authors":"S. Dersch , A. Schöttl , P. Krzystek , M. Heurich","doi":"10.1016/j.ophoto.2023.100037","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100037","url":null,"abstract":"<div><p>Precise single tree delineation allows for a more reliable determination of essential parameters such as tree species, height and vitality. Methods of instance segmentation are powerful neural networks for detecting and segmenting single objects and have the potential to push the accuracy of tree segmentation methods to a new level. In this study, two instance segmentation methods, Mask R–CNN and DETR, were applied to precisely delineate single tree crowns using multispectral images and images generated from UAV lidar data. The study area was in Bavaria, 35 km north of Munich (Germany), comprising a mixed forest stand of around 7 ha characterised mainly by Norway spruce (<em>Picea abies</em>) and large groups of European beeches (<em>Fagus sylvatica</em>) with 181–236 trees per ha. The data set, consisting of multispectral images and lidar data, was acquired using a Micasense RedEdge-MX dual camera system and a Riegl miniVUX-1UAV lidar scanner, both mounted on a hexacopter (DJI Matrice 600 Pro). At an altitude of approximately 85 m, two flight missions were conducted at an airspeed of 5 m/s, leading to a ground resolution of 5 cm and a lidar point density of 560 points/<em>m</em><sup>2</sup>. In total, 1408 trees were marked by visual interpretation of the remote sensing data for training and validating the classifiers. Additionally, 125 trees were surveyed by tacheometric means used to test the optimized neural networks. The evaluations showed that segmentation using only multispectral imagery performed slightly better than with images generated from lidar data. 
In terms of F1 score, Mask R–CNN with color infrared (CIR) images achieved 92% in coniferous, 85% in deciduous and 83% in mixed stands. Compared to the images generated by lidar data, these scores are the same for coniferous and slightly worse for deciduous and mixed plots, by 4% and 2%, respectively. DETR with CIR images achieved 90% in coniferous, 81% in deciduous and 84% in mixed stands. These scores were 2%, 1%, and 2% worse, respectively, compared to the lidar data images in the same test areas. Interestingly, four conventional segmentation methods performed significantly worse than CIR-based and lidar-based instance segmentations. Additionally, the results revealed that tree crowns were more accurately segmented by instance segmentation. All in all, the results highlight the practical potential of the two deep learning-based tree segmentation methods, especially in comparison to baseline methods.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100037"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49723922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAV-based reference data for the prediction of fractional cover of standing deadwood from Sentinel time series","authors":"Felix Schiefer , Sebastian Schmidtlein , Annett Frick , Julian Frey , Randolf Klinke , Katarzyna Zielewska-Büttner , Samuli Junttila , Andreas Uhl , Teja Kattenborn","doi":"10.1016/j.ophoto.2023.100034","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100034","url":null,"abstract":"<div><p>Increasing tree mortality due to climate change has been observed globally. Remote sensing is a suitable means for detecting tree mortality and has been proven effective for the assessment of abrupt and large-scale stand-replacing disturbances, such as those caused by windthrow, clear-cut harvesting, or wildfire. Non-stand replacing tree mortality events (e.g., due to drought) are more difficult to detect with satellite data – especially across regions and forest types. A common limitation for this is the availability of spatially explicit reference data. To address this issue, we propose an automated generation of reference data using uncrewed aerial vehicles (UAV) and deep learning-based pattern recognition. In this study, we used convolutional neural networks (CNN) to semantically segment crowns of standing dead trees from 176 UAV-based very high-resolution (<4 cm) RGB-orthomosaics that we acquired over six regions in Germany and Finland between 2017 and 2021. The local-level CNN-predictions were then extrapolated to landscape-level using Sentinel-1 (i.e., backscatter and interferometric coherence), Sentinel-2 time series, and long short-term memory networks (LSTM) to predict the cover fraction of standing deadwood per Sentinel-pixel. The CNN-based segmentation of standing deadwood from UAV imagery was accurate (F1-score = 0.85) and consistent across the different study sites and years.
Best results for the LSTM-based extrapolation of fractional cover of standing deadwood using Sentinel-1 and -2 time series were achieved using all available Sentinel-1 and -2 bands, kernel normalized difference vegetation index (kNDVI), and normalized difference water index (NDWI) (Pearson’s r = 0.66, total least squares regression slope = 1.58). The landscape-level predictions showed high spatial detail and were transferable across regions and years. Our results highlight the effectiveness of deep learning-based algorithms for an automated and rapid generation of reference data for large areas using UAV imagery. Potential for improving the presented upscaling approach was found particularly in ensuring the spatial and temporal consistency of the two data sources (e.g., co-registration of very high-resolution UAV data and medium resolution satellite data). The increasing availability of publicly available UAV imagery on sharing platforms combined with automated and transferable deep learning-based mapping algorithms will further increase the potential of such multi-scale approaches.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100034"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49737029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
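The two spectral indices named in the abstract above are simple band ratios. The sketch below uses the kernel formulation kNDVI = tanh(NDVI²) and the Gao (NIR/SWIR) variant of NDWI; index definitions vary, so treat this as one common convention rather than the paper's exact choice, and the reflectances are toy values.

```python
import numpy as np

# Toy surface reflectances for one pixel (illustrative values).
nir, red, swir = 0.40, 0.10, 0.20

ndvi = (nir - red) / (nir + red)       # classic NDVI
kndvi = np.tanh(ndvi ** 2)             # kernel NDVI, tanh(NDVI^2) form
ndwi = (nir - swir) / (nir + swir)     # NDWI, Gao (NIR/SWIR) variant
print(float(kndvi), float(ndwi))
```

In a time-series setting these indices are computed per acquisition date and stacked as extra input channels for the LSTM.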
{"title":"Spatial patterns of biomass change across Finland in 2009–2015","authors":"Markus Haakana, Sakari Tuominen, Juha Heikkinen, Mikko Peltoniemi, Aleksi Lehtonen","doi":"10.1016/j.ophoto.2023.100036","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100036","url":null,"abstract":"<div><p>Forest characteristics vary greatly at the regional level and in smaller geographic areas in Finland. The amount of greenhouse gas emissions is related to changes in biomass and the soil type (e.g. upland soils vs. peatlands). In this paper, estimating and explaining spatial patterns of tree biomass change across Finland was the main interest. We analysed biomass changes on different soil and site types between the years 2009 and 2015 using the Finnish multi-source national forest inventory (MS-NFI) raster layers. The MS-NFI method is based on combining information from satellite imagery, digital maps and national forest inventory (NFI) field data. Automatic segmentation was used to create silvicultural management and treatment units. An average biomass estimate of the segmented MS-NFI (MS–NFI–seg) map was 73.9 tons ha<sup>−1</sup> compared to the national forest inventory estimate of 76.5 tons ha<sup>−1</sup> in 2015. Forest soil type had a similar effect on average biomass in MS–NFI–seg and NFI data. Despite good regional and country-level results, segmentation narrowed the biomass distributions. Hence, biomass changes on segments can be considered only approximate values; also, such small differences in average biomass may accumulate when map layers from more than one time point are compared. A kappa of 0.44 was achieved for precision when comparing undisturbed and disturbed forest stands in the segmented Global Forest Change data (GFC-seg) and MS–NFI–seg map. Compared to NFI, 69% and 62% of disturbed areas were detected by GFC-seg and MS–NFI–seg, respectively.
Spatially accurate map data of biomass changes on forest land improve the ability to suggest optimal management alternatives for any patch of land, e.g. in terms of climate change mitigation.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"8 ","pages":"Article 100036"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49723921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
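The kappa statistic reported in the abstract above measures agreement beyond chance between two disturbance maps. A minimal sketch from a 2x2 confusion matrix; the counts are toy values, not the study's data.

```python
import numpy as np

# Cohen's kappa from a disturbed/undisturbed confusion matrix
# (rows: reference map, cols: compared map; counts are illustrative).
conf = np.array([[70, 10],
                 [15, 55]], dtype=float)
n = conf.sum()
po = np.trace(conf) / n                          # observed agreement
pe = (conf.sum(0) * conf.sum(1)).sum() / n**2    # chance agreement
kappa = (po - pe) / (1 - pe)
print(round(kappa, 3))
```

Kappa near 0 means chance-level agreement, near 1 near-perfect agreement; the 0.44 reported above sits in the moderate range.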
{"title":"Estimation of lidar-based gridded DEM uncertainty with varying terrain roughness and point density","authors":"Luyen K. Bui , Craig L. Glennie","doi":"10.1016/j.ophoto.2022.100028","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100028","url":null,"abstract":"<div><p>Light detection and ranging (lidar) scanning systems can be used to provide a point cloud with high quality and point density. Gridded digital elevation models (DEMs) interpolated from laser scanning point clouds are widely used due to their convenience, however, DEM uncertainty is rarely provided. This paper proposes an end-to-end workflow to quantify the uncertainty (i.e., standard deviation) of a gridded lidar-derived DEM. A benefit of the proposed approach is that it does not require independent validation data measured by alternative means. The input point cloud requires per point uncertainty which is derived from lidar system observational uncertainty. The propagated uncertainty caused by interpolation is then derived by the general law of propagation of variances (GLOPOV) with simultaneous consideration of both horizontal and vertical point cloud uncertainties. Finally, the interpolated uncertainty is then scaled by point density and a measure of terrain roughness to arrive at the final gridded DEM uncertainty. The proposed approach is tested with two lidar datasets measured in Waikoloa, Hawaii, and Sitka, Alaska. Triangulated irregular network (TIN) interpolation is chosen as the representative gridding approach. The results indicate estimated terrain roughness/point density scale factors ranging between 1 (in flat areas) and 7.6 (in high roughness areas), with a mean value of 2.3 for the Waikoloa dataset and between 1 and 9.2 with a mean value of 1.2 for the Sitka dataset. 
As a result, the final gridded DEM uncertainties are estimated between 0.059 m and 0.677 m with a mean value of 0.164 m for the Waikoloa dataset and between 0.059 m and 1.723 m with a mean value of 0.097 m for the Sitka dataset.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"7 ","pages":"Article 100028"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49725976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
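A simplified version of the interpolation-uncertainty step above: a DEM node inside a TIN triangle is a barycentric combination z = Σ wᵢzᵢ, so with independent per-vertex vertical variances the propagated variance is Σ wᵢ²σᵢ². The paper's GLOPOV treatment also carries horizontal uncertainties and covariances, which this sketch (with toy values) omits.

```python
import numpy as np

# Vertical-only variance propagation through TIN interpolation.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # triangle xy (toy)
p = np.array([0.25, 0.25])                            # DEM node location

# Solve for barycentric weights: [x; y; 1] system.
A = np.vstack([tri.T, np.ones(3)])
w = np.linalg.solve(A, np.append(p, 1.0))

sigma = np.array([0.05, 0.08, 0.06])                  # per-vertex std (m)
var_dem = np.sum(w**2 * sigma**2)                     # independent errors
print(float(np.sqrt(var_dem)))                        # node std (m)
```

The roughness/point-density scale factor described above would then multiply this interpolated standard deviation.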