{"title":"An end-to-end deep learning solution for automated LiDAR tree detection in the urban environment","authors":"Julian R. Rice , G. Andrew Fricker , Jonathan Ventura","doi":"10.1016/j.ophoto.2025.100092","DOIUrl":"10.1016/j.ophoto.2025.100092","url":null,"abstract":"<div><div>Cataloging and classifying trees in the urban environment is a crucial step in urban and environmental planning; however, manual collection and maintenance of this data is expensive and time-consuming. Although algorithmic approaches that rely on remote sensing data have been developed for tree detection in forests, they generally struggle in the more varied urban environment. This work proposes a novel end-to-end deep learning method for the detection of trees in the urban environment from remote sensing data. Specifically, we develop and train a novel PointNet-based neural network architecture to predict tree locations directly from LiDAR data augmented with multi-spectral imagery. We compare this model to a number of high-performing baselines on a large and varied dataset in the Southern California region, and find that our method outperforms all baselines in terms of tree detection ability (75.5% F-score) and positional accuracy (2.28 meter root mean squared error), while being highly efficient. We then analyze and compare the sources of errors, and how these reveal the strengths and weaknesses of each approach. 
Our results highlight the importance of fusing spectral and structural information for remote sensing tasks in complex urban environments.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100092"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144306762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
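Scores like the 75.5% F-score and 2.28 m RMSE above imply a matching step between predicted and reference tree positions. The paper's exact matching protocol is not given in the abstract, so the greedy nearest-neighbour pairing and the 3 m match radius below are illustrative assumptions only:

```python
import numpy as np

def evaluate_detections(pred, truth, max_dist=3.0):
    """Greedily match each predicted tree to its nearest unmatched
    reference tree; a pair counts as a true positive if within
    max_dist. Returns (F-score, RMSE over matched pairs)."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    unmatched = set(range(len(truth)))
    errors = []
    for p in pred:
        if not unmatched:
            break
        j = min(unmatched, key=lambda k: np.linalg.norm(p - truth[k]))
        d = np.linalg.norm(p - truth[j])
        if d <= max_dist:
            unmatched.discard(j)
            errors.append(d)
    tp = len(errors)
    fp, fn = len(pred) - tp, len(truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    rmse = float(np.sqrt(np.mean(np.square(errors)))) if errors else float("nan")
    return f, rmse

# toy example: two of three reference trees detected, plus one false positive
truth = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
pred = [(0.5, 0.0), (10.0, 1.0), (50.0, 50.0)]
f_score, rmse = evaluate_detections(pred, truth)
```

Greedy matching can steal matches when detections are dense; published benchmarks often use optimal (Hungarian) assignment instead.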
{"title":"A method for extracting water surface and hydrophytic vegetation from ICESat-2 data in wetlands","authors":"Rong Zhao , Shijuan Gao , Kun Zhang , Defang Li , Yi Li","doi":"10.1016/j.ophoto.2025.100097","DOIUrl":"10.1016/j.ophoto.2025.100097","url":null,"abstract":"<div><div>The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) provides a great opportunity to measure water surface and hydrophytic vegetation in complex wetlands. Obtaining reliable signal photons from ICESat-2 data in wetlands is challenging because there are many types of noise photons, such as specular return photons, after-pulse photons, and noise photons caused by sunlight. In addition, the large photon density difference between the water and hydrophytic vegetation makes it difficult to find accurate hydrophytic vegetation photons. Therefore, this research aims to propose a method to obtain high-accuracy signal photons and classify water body photons and hydrophytic vegetation photons in complex wetlands. First, we introduced the modified elevation histogram statistics vector-based (MEHSV) method to filter out noise photons caused by sunlight. The MEHSV method was developed to retain sparse canopy photons; it can therefore also retain sparse hydrophytic vegetation photons. Second, peak analysis of the elevation histogram statistics removed the specular return photons and after-pulse photons caused by the water surface. Finally, manually labeled photons and reference water surface level data were used to assess the proposed method. The filtering results showed that the proposed method achieved an F value of 0.99. Compared with other reference methods, the proposed method both preserved hydrophytic vegetation photons that would otherwise be misrecognized and effectively removed all types of noise photons. The water photons and hydrophytic vegetation photons were distinguished accurately. 
Additionally, the accuracy of the retrieved water surface level (R<sup>2</sup> = 0.97, RMSE = 0.84 m) confirmed the good performance of the proposed method.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100097"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144903838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
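A toy version of the elevation-histogram peak analysis described above might look like the sketch below. The bin width, the buffer distance, and the single-surface-peak assumption are illustrative choices, not the authors' parameters:

```python
import numpy as np

def remove_subsurface_returns(elev, bin_width=0.1, buffer=0.25):
    """Take the dominant elevation-histogram bin as the water surface
    and drop photons more than `buffer` metres below it, where
    specular and after-pulse returns form spurious secondary peaks."""
    elev = np.asarray(elev, float)
    edges = np.arange(elev.min(), elev.max() + bin_width, bin_width)
    counts, edges = np.histogram(elev, bins=edges)
    i = int(np.argmax(counts))
    peak = 0.5 * (edges[i] + edges[i + 1])   # centre of dominant bin
    keep = elev >= peak - buffer
    return elev[keep], peak

# toy profile: dense water-surface photons, an after-pulse cluster
# ~2.4 m below the surface, and a few vegetation photons above it
elev = np.concatenate([np.full(40, 100.05),
                       np.full(5, 97.62),
                       np.full(3, 100.83)])
filtered, peak = remove_subsurface_returns(elev)
```

Vegetation photons above the surface survive the filter, while the sub-surface cluster is removed.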
{"title":"From gaps to granularity: CRPAG-DSHAT based multi-modal deep learning framework for DEM void repair and super-resolution reconstruction in Himalayas","authors":"Sayantan Mandal, Ashis Kumar Saha","doi":"10.1016/j.ophoto.2025.100101","DOIUrl":"10.1016/j.ophoto.2025.100101","url":null,"abstract":"<div><div>Digital Elevation Models (DEMs) are essential for terrain characterization and environmental modeling, yet their utility is limited by data voids and coarse resolution, especially in the complex mountainous regions of the Himalayas. To address these challenges, we propose a novel dual-stage deep learning pipeline that unifies void filling and super-resolution into a cohesive framework, leveraging both topographic fidelity and spectral texture. First, the <strong>Conditional Residual Pyramid Attentional Generator (CRPAG)</strong>, a hybrid model, integrates multi-scale DEM features with Sentinel-2 red band reflectance (∼665 nm) using an <strong>Improved Channel Attention Module</strong> (ICAM), a <strong>Residual Pyramid Attention Block</strong> (TFG_RPAB), and a dual-encoder design. This allows CRPAG to prioritize structural fidelity (RMSE 9.1–28.9 m) while reconstructing missing terrain features (Mean Absolute Error (MAE) 1.9–8.1 m). This void-filled, high-resolution DEM then supervises the training of the <strong>Dual-Stream Hierarchical Attention Transformer (DS-HAT)</strong>, which performs super-resolution on globally available low-resolution DEMs (ALOS PALSAR), guided by pixel-wise height attention and texture-aware mechanisms. Compared to benchmark models such as MCU-Net-EDF and conventional U-Nets, our integrated system shows improvements in elevation accuracy (RMSE ↓, P95 = 9.2 m), spatial consistency (Moran's I ↑), and structural similarity (SSIM ↑), particularly across high-curvature and spectrally ambiguous regions. 
Additionally, ablation studies confirm the complementary role of topographic variables in mitigating oversmoothing and enhancing terrain realism. This dual-stage strategy not only enhances DEM fidelity but also provides a scalable framework for improving DEM quality. Through this multi-modal fusion, this work transforms topographic knowledge into a computable framework, advancing DEM applicability in hydrological modeling, detection mechanisms and disaster prediction.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100101"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145009967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
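The RMSE and MAE figures quoted for void reconstruction are most meaningful when evaluated only on the cells that were voids in the input. Whether the paper restricts evaluation this way is not stated in the abstract; the sketch below simply illustrates mask-restricted error metrics:

```python
import numpy as np

def void_fill_errors(dem_pred, dem_ref, void_mask):
    """RMSE and MAE of a void-filled DEM, computed only over the
    cells flagged as voids in the input (where filling happened)."""
    diff = (np.asarray(dem_pred, float) - np.asarray(dem_ref, float))[void_mask]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    mae = float(np.mean(np.abs(diff)))
    return rmse, mae

# toy 2x2 DEMs: only the two masked cells were reconstructed
ref = np.array([[10.0, 12.0], [14.0, 16.0]])
pred = np.array([[10.0, 13.0], [14.0, 12.0]])
mask = np.array([[False, True], [False, True]])
rmse, mae = void_fill_errors(pred, ref, mask)
```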
{"title":"FeatureGS: Eigenvalue-feature optimization in 3D Gaussian Splatting for geometrically accurate and artifact-reduced reconstruction","authors":"Miriam Jäger, Markus Hillemann, Boris Jutzi","doi":"10.1016/j.ophoto.2025.100100","DOIUrl":"10.1016/j.ophoto.2025.100100","url":null,"abstract":"<div><div>3D Gaussian Splatting (3DGS) has emerged as a powerful approach for 3D scene reconstruction using 3D Gaussians. However, neither the centers nor surfaces of the Gaussians are accurately aligned to the object surface, complicating their direct use in point cloud and mesh reconstruction. Additionally, 3DGS typically produces floater artifacts, increasing the number of Gaussians and storage requirements. To address these issues, we present FeatureGS, which incorporates an additional geometric loss term based on an eigenvalue-derived 3D shape feature into the optimization process of 3DGS. The goal is to improve geometric accuracy and enhance the properties of planar surfaces with reduced structural entropy in local 3D neighborhoods, as typically found in man-made environments. We present four alternative formulations for the geometric loss term based on ‘planarity’ of Gaussians, as well as ‘planarity’, ‘omnivariance’, and ‘eigenentropy’ of Gaussian neighborhoods. On the small-scale DTU benchmark with man-made scenes, FeatureGS achieves a 20% improvement in geometric accuracy, suppresses floater artifacts by 90%, and reduces the number of Gaussians by 95%. 
FeatureGS proves to be a strong method for geometrically accurate, artifact-reduced and memory-efficient 3D scene reconstruction, enabling the direct use of Gaussian centers for geometric representation.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100100"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
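The eigenvalue features named above follow standard definitions from point cloud analysis: with sorted covariance eigenvalues λ1 ≥ λ2 ≥ λ3 of a local neighbourhood, planarity = (λ2 − λ3)/λ1, omnivariance = (λ1·λ2·λ3)^(1/3), and eigenentropy = −Σ eᵢ ln eᵢ for normalized eᵢ = λᵢ/Σλ. A minimal sketch (the paper's exact neighbourhood construction over Gaussians is not reproduced here):

```python
import numpy as np

def eigen_features(points):
    """Eigenvalue-based 3D shape features of a point neighbourhood,
    from the sorted covariance eigenvalues l1 >= l2 >= l3."""
    pts = np.asarray(points, float)
    lam = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
    lam = np.clip(lam, 1e-12, None)      # guard against tiny negatives
    e = lam / lam.sum()
    return {
        "planarity": float((lam[1] - lam[2]) / lam[0]),
        "omnivariance": float(np.cbrt(lam.prod())),
        "eigenentropy": float(-(e * np.log(e)).sum()),
    }

# a noisy planar patch should score high planarity, low omnivariance
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         rng.normal(0, 0.01, 200)])
feats = eigen_features(plane)
```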
{"title":"Seeing beyond vegetation: A comparative occlusion analysis between Multi-View Stereo, Neural Radiance Fields and Gaussian Splatting for 3D reconstruction","authors":"Ivana Petrovska, Boris Jutzi","doi":"10.1016/j.ophoto.2025.100089","DOIUrl":"10.1016/j.ophoto.2025.100089","url":null,"abstract":"<div><div>Image-based 3D reconstruction offers realistic scene representation for applications that require accurate geometric information. Although the assumption that images are simultaneously captured, perfectly posed and noise-free simplifies the 3D reconstruction, this rarely holds in real-world settings. A real-world scene comprises multiple objects which obstruct each other and certain object parts are occluded, thus it can be challenging to generate a complete and accurate geometry. Being a part of our environment, we are particularly interested in vegetation that often obscures important structures, leading to incomplete reconstruction of the underlying features. In this contribution, we present a comparative analysis of the geometry behind vegetation occlusions reconstructed by traditional Multi-View Stereo (MVS) and radiance field methods, namely: Neural Radiance Fields (NeRFs), 3D Gaussian Splatting (3DGS) and 2D Gaussian Splatting (2DGS). Excluding certain image parts and investigating how different levels of vegetation occlusion affect the geometric reconstruction, we consider Synthetic masks with different occlusion coverage of 10% (Very Sparse), 30% (Sparse), 50% (Medium), 70% (Dense) and 90% (Very Dense). To additionally demonstrate the impact of spatially consistent 3D occlusions, we use Natural masks (up to 35%), where the vegetation is stationary in the 3D scene but occludes different image regions depending on the view-point. Our investigations are based on real-world scenarios; one occlusion-free indoor scenario, on which we apply the Synthetic masks and one outdoor scenario, from which we derive the Natural masks. 
The qualitative and quantitative 3D evaluation is based on point cloud comparison against a ground truth mesh addressing accuracy and completeness. The experiments demonstrate that although MVS shows the lowest accuracy errors in both scenarios, its completeness declines sharply as the occlusion percentage increases, eventually failing under Very Dense masks. NeRFs are robust, achieving the highest completeness under masking, although their accuracy decreases proportionally as occlusion increases. 2DGS achieves the second-best accuracy, outperforming NeRFs and 3DGS, with consistent performance across occlusion scenarios. Additionally, using MVS for initialization improves the completeness of 3DGS and 2DGS without significantly sacrificing accuracy, owing to more densely reconstructed homogeneous areas. We demonstrate that radiance field methods can compete against traditional MVS, showing robust performance for a complete reconstruction under vegetation occlusions.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"16 ","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144107473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
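Accuracy and completeness against a ground-truth mesh are commonly computed as nearest-point statistics after sampling the mesh to points. The brute-force sketch below assumes that convention and an illustrative completeness threshold `tau`; the paper's exact thresholds are not given in the abstract:

```python
import numpy as np

def accuracy_completeness(pred_pts, gt_pts, tau=0.05):
    """Accuracy: mean distance from each reconstructed point to its
    nearest ground-truth point. Completeness: fraction of ground-truth
    points with a reconstructed point within tau."""
    pred = np.asarray(pred_pts, float)
    gt = np.asarray(gt_pts, float)
    # full pairwise distance matrix (fine for small clouds;
    # use a KD-tree for real data)
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    accuracy = float(d.min(axis=1).mean())               # pred -> gt
    completeness = float((d.min(axis=0) <= tau).mean())  # gt -> pred
    return accuracy, completeness

# toy clouds: half the ground truth is reconstructed, slightly noisy
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
pred = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.04)]
acc, comp = accuracy_completeness(pred, gt)
```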
{"title":"Direct integration of ALS and MLS for real-time localization and mapping","authors":"Eugeniu Vezeteu , Aimad El Issaoui , Heikki Hyyti , Teemu Hakala , Jesse Muhojoki , Eric Hyyppä , Antero Kukko , Harri Kaartinen , Ville Kyrki , Juha Hyyppä","doi":"10.1016/j.ophoto.2025.100088","DOIUrl":"10.1016/j.ophoto.2025.100088","url":null,"abstract":"<div><div>This paper presents a novel real-time fusion pipeline for integrating georeferenced airborne laser scanning (ALS) and online mobile laser scanning (MLS) data to enable accurate localization and mapping in complex natural environments. To address sensor drift caused by relative Light Detection and Ranging (lidar) and inertial measurements, occlusion affecting the Global Navigation Satellite System (GNSS) signal quality, and differences in the fields of view of the sensors, we propose a tightly coupled lidar-inertial registration system with an adaptive, robust Iterated Error-State Extended Kalman Filter (RIEKF). By leveraging ALS-derived prior maps as a global reference, our system effectively refines the MLS registration, even in challenging environments like forests. A novel coarse-to-fine initialization technique is introduced to estimate the initial transformation between the local MLS and global ALS frames using online GNSS measurements. Experimental results in forest environments demonstrate significant improvements in both absolute and relative trajectory accuracy, with relative mean localization errors as low as 0.17 m for a prior map based on dense ALS data and 0.22 m for a prior map based on sparse ALS data. We found that while GNSS does not significantly improve registration accuracy, it is essential for providing the initial transformation between the ALS and MLS frames, enabling their direct and online fusion. 
The proposed system predicts poses at an inertial measurement unit (IMU) rate of 400 Hz and updates the pose at the lidar frame rate of 10 Hz.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"16 ","pages":"Article 100088"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143816764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
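The two-rate structure above (pose prediction at the 400 Hz IMU rate, correction at the 10 Hz lidar rate) can be sketched schematically. The state, gains, and measurement below are purely illustrative stand-ins; the actual system is a full lidar-inertial RIEKF:

```python
import numpy as np

def run_filter(t_end=0.1, imu_hz=400, lidar_hz=10):
    """Schematic of predict/update scheduling only: a 1D constant-
    velocity state is propagated at the IMU rate and corrected at the
    lidar rate with a fixed illustrative gain."""
    dt = 1.0 / imu_hz
    x, v = 0.0, 1.0                  # position estimate, true velocity
    n_pred = n_upd = 0
    steps = int(round(t_end * imu_hz))
    for k in range(1, steps + 1):
        x += v * dt                  # predict at IMU rate
        n_pred += 1
        if k % (imu_hz // lidar_hz) == 0:
            z = k * dt               # noiseless "lidar" position fix
            x += 0.5 * (z - x)       # correct at lidar rate
            n_upd += 1
    return x, n_pred, n_upd

x, n_pred, n_upd = run_filter()
```

Over 0.1 s this performs 40 predictions and a single lidar update, mirroring the 40:1 ratio of the reported rates.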
{"title":"Transfer learning and single-polarized SAR image preprocessing for oil spill detection","authors":"Nataliia Kussul , Yevhenii Salii , Volodymyr Kuzin , Bohdan Yailymov , Andrii Shelestov","doi":"10.1016/j.ophoto.2024.100081","DOIUrl":"10.1016/j.ophoto.2024.100081","url":null,"abstract":"<div><div>This study addresses the challenge of oil spill detection using Synthetic Aperture Radar (SAR) satellite imagery, employing deep learning techniques to improve accuracy and efficiency. We investigated the effectiveness of various neural network architectures and encoders for this task, focusing on scenarios with limited training data. The research problem centered on enhancing feature extraction from single-channel SAR data to improve oil spill detection performance.</div><div>Our methodology involved developing a novel preprocessing pipeline that converts single-channel SAR data into a three-channel RGB representation. The preprocessing technique normalizes SAR intensity values and encodes extracted features into RGB channels.</div><div>Our experiments show that LinkNet combined with an EfficientNet-B4 encoder outperforms other well-known architecture-encoder pairs.</div><div>Quantitative evaluation revealed a significant F1-score improvement of 0.064 compared to traditional dB-scale preprocessing methods. Qualitative assessment on independent SAR scenes from the Mediterranean Sea demonstrated better detection capabilities, albeit with increased sensitivity to look-alikes.</div><div>We conclude that our proposed preprocessing technique shows promise for enhancing automatic oil spill segmentation from SAR imagery. 
The study contributes to advancing oil spill detection methods, with potential implications for environmental monitoring and marine ecosystem protection.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
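The abstract does not specify which extracted features fill the three channels, so the sketch below is only one plausible normalise-and-stack scheme: a percentile-normalised dB image plus 3x3 local mean and local texture channels. All parameter choices here are assumptions:

```python
import numpy as np

def sar_to_rgb(intensity, p_low=2, p_high=98):
    """Hypothetical single-channel SAR -> 3-channel encoding:
    R = percentile-normalised dB image, G = 3x3 local mean,
    B = 3x3 local standard deviation (texture)."""
    db = 10.0 * np.log10(np.maximum(np.asarray(intensity, float), 1e-6))
    lo, hi = np.percentile(db, [p_low, p_high])
    norm = np.clip((db - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    # 3x3 box statistics via padded shifts (no scipy dependency)
    pad = np.pad(norm, 1, mode="edge")
    shifts = np.stack([pad[i:i + norm.shape[0], j:j + norm.shape[1]]
                       for i in range(3) for j in range(3)])
    mean, std = shifts.mean(axis=0), shifts.std(axis=0)
    std = np.clip(std / (std.max() + 1e-12), 0.0, 1.0)
    return np.stack([norm, mean, std], axis=-1)

img = np.exp(np.linspace(0, 2, 16)).reshape(4, 4)  # toy intensity image
rgb = sar_to_rgb(img)
```

The resulting array can be fed to any pretrained RGB encoder, which is the point of the three-channel conversion.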
{"title":"A new unified framework for supervised 3D crown segmentation (TreeisoNet) using deep neural networks across airborne, UAV-borne, and terrestrial laser scans","authors":"Zhouxin Xi, Dani Degenhardt","doi":"10.1016/j.ophoto.2025.100083","DOIUrl":"10.1016/j.ophoto.2025.100083","url":null,"abstract":"<div><div>Accurately defining and isolating 3D tree space is critical for extracting and analyzing tree inventory attributes, yet it remains a challenge due to the structural complexity and heterogeneity within natural forests. This study introduces TreeisoNet, a suite of supervised deep neural networks tailored for robust 3D tree segmentation across natural forest environments. These networks are specifically designed to identify tree locations, stem components (if available), and crown clusters, making them adaptable to varying scales of laser scanning from airborne laser scanner (ALS), terrestrial laser scanner (TLS), and unmanned aerial vehicle (UAV). Our evaluation used three benchmark datasets with manually isolated tree references, achieving mean intersection-over-union (mIoU) accuracies of 0.81 for UAV, 0.76 for TLS, and 0.59 for ALS, which are competitive with contemporary algorithms such as ForAINet, Treeiso, Mask R-CNN, and AMS3D. Noise from stem point delineation minimally impacted stem location detection but significantly affected crown clustering. Moderate manual refinement of stem points or tree centers significantly improved tree segmentation accuracies, achieving 0.85 for UAV, 0.86 for TLS, and 0.80 for ALS. The study confirms SegFormer as an effective 3D point-level classifier and an offset-based UNet as a superior segmenter, with the latter outperforming unsupervised solutions like watershed and shortest-path methods. 
TreeisoNet demonstrates strong adaptability in capturing invariant tree geometry features, ensuring transferability across different resolutions, sites, and sensors with minimal accuracy loss.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100083"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
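The mIoU figures above require pairing predicted crown segments with reference trees. The paper's matching protocol is not described in the abstract; the sketch below uses a simple majority-overlap pairing as an illustrative stand-in:

```python
import numpy as np

def mean_iou(pred_labels, ref_labels):
    """Mean IoU over reference tree instances: each reference segment
    is paired with the predicted segment that overlaps it most."""
    pred, ref = np.asarray(pred_labels), np.asarray(ref_labels)
    ious = []
    for r in np.unique(ref):
        mask_r = ref == r
        # predicted label with the largest overlap on this reference tree
        vals, counts = np.unique(pred[mask_r], return_counts=True)
        mask_p = pred == vals[np.argmax(counts)]
        ious.append((mask_r & mask_p).sum() / (mask_r | mask_p).sum())
    return float(np.mean(ious))

# toy point labelling: two reference trees, one boundary point confused
ref = np.array([0, 0, 0, 1, 1, 1])
pred = np.array([0, 0, 1, 1, 1, 1])
miou = mean_iou(pred, ref)
```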
{"title":"Detecting and measuring fine-scale urban tree canopy loss with deep learning and remote sensing","authors":"David Pedley, Justin Morgenroth","doi":"10.1016/j.ophoto.2025.100082","DOIUrl":"10.1016/j.ophoto.2025.100082","url":null,"abstract":"<div><div>Urban trees provide a multitude of environmental and amenity benefits for city occupants yet face ongoing risk of removal due to urban pressures and the preferences of landowners. Understanding the extent and location of canopy loss is critical for the effective management of urban forests. Although city-scale assessments of urban forest canopy cover are common, the accurate identification of fine-scale canopy loss remains challenging. Evaluating change at the property scale is of particular importance given the localised benefits of urban trees and the scale at which tree removal decisions are made.</div><div>The objective of this study was to develop a method to accurately detect and quantify the city-wide loss of urban tree canopy (UTC) at the scale of individual properties using publicly available remote sensing data. The study area was the city of Christchurch, New Zealand, with the study focussed on UTC loss that occurred between 2016 and 2021. To accurately delineate the 2016 UTC, a semantic segmentation deep learning model (DeepLabv3+) was pretrained using existing UTC data and fine-tuned using high resolution aerial imagery. The output of this model was then segmented into polygons representing individual trees using the Segment Anything Model. To overcome poor alignment of aerial imagery, LiDAR point cloud data was utilised to identify changes in height between 2016 and 2021, which was overlaid across the 2016 UTC to map areas of UTC loss. 
The accuracy of UTC loss predictions was validated using a visual comparison of aerial imagery and LiDAR data, with UTC loss quantified for each property within the study area.</div><div>The loss detection method achieved accurate results for the property-scale identification of UTC loss, including a mean F1 score of 0.934 and a mean IOU of 0.883. Precision values were higher than recall values (0.941 compared to 0.811), which reflected a deliberately conservative approach to avoid false positive detections. Approximately 14.5% of 2016 UTC was lost by 2021, with 74.9% of the UTC loss occurring on residential land. This research provides a novel geospatial method for evaluating fine-scale city-wide tree dynamics using remote sensing data of varying type and quality with imperfect alignment. This creates the opportunity for detailed evaluation of the drivers of UTC loss on individual properties to enable better management of existing urban forests.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
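The core of the loss-detection step above, overlaying a LiDAR height change on the 2016 UTC mask, can be sketched in raster form. The 2 m drop threshold below is an illustrative assumption, not the study's parameter:

```python
import numpy as np

def canopy_loss(chm_2016, chm_2021, utc_2016, drop=2.0):
    """Flag UTC loss where the 2016 canopy mask is true and the
    surface height dropped by more than `drop` metres between epochs;
    returns the loss mask and the loss fraction of 2016 UTC."""
    diff = np.asarray(chm_2016, float) - np.asarray(chm_2021, float)
    loss = utc_2016 & (diff > drop)
    return loss, float(loss.sum() / max(utc_2016.sum(), 1))

# toy 2x2 rasters: one canopy cell lost, one retained, one pruned
chm16 = np.array([[10.0, 8.0], [0.5, 9.0]])
chm21 = np.array([[10.0, 1.0], [0.5, 8.5]])
utc16 = np.array([[True, True], [False, True]])
loss_mask, frac = canopy_loss(chm16, chm21, utc16)
```

Gating on the 2016 UTC mask keeps non-canopy height changes (demolished buildings, earthworks) out of the loss estimate.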
{"title":"Performance analysis of ultra-wideband positioning for measuring tree positions in boreal forest plots","authors":"Zuoya Liu , Harri Kaartinen , Teemu Hakala , Heikki Hyyti , Juha Hyyppä , Antero Kukko , Ruizhi Chen","doi":"10.1016/j.ophoto.2025.100087","DOIUrl":"10.1016/j.ophoto.2025.100087","url":null,"abstract":"<div><div>Accurate individual tree locations enable efficient forest inventory management and automation, support precise forest surveys, management decisions and future individual-tree harvesting plans. In this paper, we compared and analyzed in detail the performance of an ultra-wideband (UWB) data-driven method for mapping individual tree locations in boreal forest sample plots of varying complexity. Twelve forest sample plots were tested, selected from varying forest-stand conditions representing different development stages, stem densities and abundance of sub-canopy growth in boreal forests. These plots were classified into three categories (“Easy”, “Medium” and “Difficult”) according to these varying stand conditions. The experimental results show that the UWB data-driven method can map individual tree locations accurately, with total root-mean-squared errors (RMSEs) of 0.17 m, 0.2 m, and 0.26 m for “Easy”, “Medium” and “Difficult” forest plots, respectively, providing a strong reference for forest surveys.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100087"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
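UWB positioning of this kind ultimately reduces to solving a position from range measurements to known anchors. The Gauss-Newton least-squares sketch below illustrates that core step only; the anchor layout and the solver details are illustrative, not the paper's pipeline:

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """Gauss-Newton least-squares 2D position fix from ranges to
    known anchors, starting from the anchor centroid."""
    anchors = np.asarray(anchors, float)
    x = anchors.mean(axis=0)                  # initial guess
    for _ in range(iters):
        diff = x - anchors                    # (n, 2)
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]              # Jacobian of dist w.r.t. x
        r = dist - ranges                     # range residuals
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# recover a tree position from noiseless ranges to three anchors
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(np.asarray(anchors) - true_pos, axis=1)
est = trilaterate(anchors, ranges)
```

With noisy ranges the same solver returns the least-squares position, which is where the reported plot-level RMSEs come from.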