{"title":"Permanent terrestrial laser scanning for near-continuous environmental observations: Systems, methods, challenges and applications","authors":"Roderik Lindenbergh , Katharina Anders , Mariana Campos , Daniel Czerwonka-Schröder , Bernhard Höfle , Mieke Kuschnerus , Eetu Puttonen , Rainer Prinz , Martin Rutzinger , Annelies Voordendag , Sander Vos","doi":"10.1016/j.ophoto.2025.100094","DOIUrl":"10.1016/j.ophoto.2025.100094","url":null,"abstract":"<div><div>Many topographic scenes exhibit complex dynamic behavior that is difficult to map, quantify, predict and understand. A terrestrial laser scanner fixed at a permanent position can be used to monitor such scenes in an automated way with centimeter to decimeter quality at ranges of up to several kilometers. Laser scanners are active sensors and can therefore continue operating at night. Their independence from texture conditions ensures that in principle they provide stable range measurements for varying surface conditions. Recent years have seen a strong increase in the employment of such systems for different scientific applications in geosciences, environmental and ecological sciences, including forestry, glaciology, and geomorphology. At the same time, this employment resulted in a new type of 4D topographic data sets (3D point clouds + time) with a significant temporal dimension, as systems are now able to acquire thousands of consecutive epochs in a row. Extracting information from these 4D data sets turns out to be challenging, first, because of insufficient knowledge of the error budget and correlations, and, second, because of a lack of algorithms, benchmarks, and best-practice workflows.
This paper provides an overview of different 4D systems for near-continuous laser scanning, and discusses systematic challenges including instability of the sensor system, meteorological and atmospheric influences, and data alignment, before discussing recently developed methods and scientific software for extracting and parameterizing changes from 4D topographic data sets, in connection to the different applications.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100094"},"PeriodicalIF":0.0,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144604308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
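The 4D change-extraction problem this abstract describes can be illustrated with a toy example. This is a minimal sketch, not the paper's method: it compares gridded elevations per epoch against the first epoch and flags cells exceeding an assumed level of detection (the 0.05 m LoD, the grid layout, and the function name are illustrative).

```python
# Minimal sketch (not the paper's workflow): flagging change in a 4D
# (3D space + time) series of gridded elevations from permanent TLS.
# The 0.05 m level of detection (LoD) is an assumed illustrative value.

def detect_change(epochs, lod=0.05):
    """epochs: list of dicts mapping (row, col) -> elevation per epoch.
    Returns (epoch index, cell, elevation difference) for every cell whose
    change versus the first epoch exceeds the level of detection."""
    reference = epochs[0]
    changes = []
    for t, epoch in enumerate(epochs[1:], start=1):
        for cell, z in epoch.items():
            if cell in reference:
                dz = z - reference[cell]
                if abs(dz) > lod:
                    changes.append((t, cell, round(dz, 3)))
    return changes

epochs = [
    {(0, 0): 10.00, (0, 1): 12.00},
    {(0, 0): 10.02, (0, 1): 12.00},   # 0.02 m is below the LoD: not reported
    {(0, 0): 10.00, (0, 1): 11.80},   # 0.20 m surface lowering at (0, 1)
]
print(detect_change(epochs))  # [(2, (0, 1), -0.2)]
```

In practice the per-cell LoD would itself depend on the error budget (registration, atmosphere, surface roughness) that the paper discusses.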
{"title":"Evaluating the role of training data origin for country-scale cropland mapping in data-scarce regions: A case study of Nigeria","authors":"Joaquin Gajardo , Michele Volpi , Daniel Onwude , Thijs Defraeye","doi":"10.1016/j.ophoto.2025.100091","DOIUrl":"10.1016/j.ophoto.2025.100091","url":null,"abstract":"<div><div>Cropland maps are essential for remote sensing-based agricultural monitoring, providing timely insights about agricultural development without requiring extensive field surveys. While machine learning enables large-scale mapping, it relies on geo-referenced ground-truth data, which is time-consuming to collect, motivating efforts to integrate global datasets for mapping in data-scarce regions. A key challenge is understanding how the quantity, quality, and proximity of the training data to the target region influences model performance in regions with limited local ground truth. To address this, we evaluate the impact of combining global and local datasets for cropland mapping in Nigeria at 10 m resolution. We manually labelled 1,827 data points evenly distributed across Nigeria and leveraged the crowd-sourced Geowiki dataset, evaluating three subsets of it: Nigeria, Nigeria + neighbouring countries, and worldwide. Using Google Earth Engine (GEE), we extracted multi-source time series data from Sentinel-1, Sentinel-2, ERA5 climate, and a digital elevation model (DEM) and compared Random Forest (RF) classifiers with Long Short-Term Memory (LSTM) networks, including a lightweight multi-task learning variant (multi-headed LSTM), previously applied to cropland mapping in other regions. Our findings highlight the importance of local training data, which consistently improved performance, with accuracy gains up to 0.246 (RF) and 0.178 (LSTM). Models trained on Nigeria-only or regional datasets outperformed those trained on global data, except for the multi-headed LSTM, which uniquely benefited from global samples when local data was unavailable. 
A sensitivity analysis revealed that Sentinel-1, climate, and topographic data were particularly important, as their removal reduced accuracy by up to 0.154 and F1-score by 0.593. Handling class imbalance was also critical, with weighted loss functions improving accuracy by up to 0.071 for the single-headed LSTM. Our best-performing model, a single-headed LSTM trained on Nigeria-only data, achieved an F1-score of 0.814 and accuracy of 0.842, performing competitively with the best global land cover product and showing strong recall performance, a metric highly relevant for food security applications. These results underscore the value of regionally focused training data, proper class imbalance handling, and multi-modal feature integration for improving cropland mapping in data-scarce regions. We release our data, source code, output maps, and an interactive GEE web application to facilitate further research.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100091"},"PeriodicalIF":0.0,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144596268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
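The weighted-loss handling of class imbalance mentioned above is commonly implemented with inverse-frequency class weights. A minimal sketch assuming that standard formulation (the label names and 4:1 ratio are illustrative, not the paper's dataset):

```python
# Hedged sketch of inverse-frequency class weights, as commonly plugged
# into a weighted cross-entropy loss to counter class imbalance.
from collections import Counter

def inverse_frequency_weights(labels):
    """Return {class: weight} with weight n / (k * count), so that the
    count-weighted sum of weights equals the number of samples n."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

labels = ["non-crop"] * 80 + ["crop"] * 20     # 4:1 imbalance
w = inverse_frequency_weights(labels)
print(w)  # {'non-crop': 0.625, 'crop': 2.5}
```

The rare class receives the larger weight, so misclassified crop pixels contribute proportionally more to the loss.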
{"title":"Efficient tree mapping through deep distance transform (DDT) learning","authors":"Jan Schindler , Ziyi Sun , Bing Xue , Mengjie Zhang","doi":"10.1016/j.ophoto.2025.100095","DOIUrl":"10.1016/j.ophoto.2025.100095","url":null,"abstract":"<div><div>Trees provide essential ecosystem services in urban areas, rural landscapes and forests. Individual tree information can inform forest and risk modelling, health studies and decision-making in public and non-governmental sectors. The increase in available remote sensing data and advances in automated object detection make it feasible to map trees over large areas in unprecedented detail. Deep learning-based instance segmentation methods have thereby become the state-of-the-art in tree crown delineation tasks from aerial ortho-photography. Many of these methods are based on one- and two-stage detector frameworks such as Mask-RCNN and YOLO, which were developed focussing on speed and accuracy against common benchmark datasets. Another class of object detectors is based on encoder-decoder networks such as UNet, which offer easy integration into existing workflows and high accuracy even in complex forest scenes in regional and national tree studies. While previous methods had to combine multi-model and multi-task outputs to create decision surfaces, we developed an efficient UNet-based modelling approach which focusses solely on learning the distance transforms of tree objects as a cost surface for watershed segmentation.
Our algorithm achieves superior instance segmentation across native forest, rural and urban environments in Aotearoa New Zealand, with an overall F1 score of 0.53 (0.18 for small, 0.45 for medium and 0.67 for large crowns), surpassing previous approaches while decreasing modelling complexity and enabling fast and large-scale tree mapping.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100095"},"PeriodicalIF":0.0,"publicationDate":"2025-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144595538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
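The core idea, a distance transform used as a cost surface for watershed segmentation, can be sketched on a toy binary mask. Note the paper *learns* the distance transform with a UNet; here it is computed exactly (Manhattan metric, two-pass algorithm) and the watershed is a simplified ordered-flooding variant, all assumptions for illustration.

```python
# Toy sketch of the DDT idea: distance transform of a tree mask, then
# crown labels grown from distance peaks (simplified ordered flooding).

def distance_transform(mask):
    """Two-pass Manhattan distance to the nearest background (0) pixel."""
    h, w = len(mask), len(mask[0])
    INF = h + w
    d = [[0 if mask[r][c] == 0 else INF for c in range(w)] for r in range(h)]
    for r in range(h):                     # forward pass: top-left sweep
        for c in range(w):
            if d[r][c]:
                d[r][c] = min(d[r][c],
                              (d[r - 1][c] if r else INF) + 1,
                              (d[r][c - 1] if c else INF) + 1)
    for r in range(h - 1, -1, -1):         # backward pass: bottom-right sweep
        for c in range(w - 1, -1, -1):
            if d[r][c]:
                d[r][c] = min(d[r][c],
                              d[r + 1][c] + 1 if r + 1 < h else INF,
                              d[r][c + 1] + 1 if c + 1 < w else INF)
    return d

def watershed_labels(dist):
    """Flood foreground pixels in order of decreasing distance; a pixel
    joins an already-labelled neighbour or starts a new crown."""
    h, w = len(dist), len(dist[0])
    order = sorted(((dist[r][c], r, c) for r in range(h) for c in range(w)
                    if dist[r][c] > 0), reverse=True)
    labels, next_label = {}, 1
    for _, r, c in order:
        neighbours = {labels[(nr, nc)]
                      for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                      if (nr, nc) in labels}
        if neighbours:
            labels[(r, c)] = min(neighbours)
        else:
            labels[(r, c)] = next_label
            next_label += 1
    return labels

mask = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 1],
]
dist = distance_transform(mask)
labels = watershed_labels(dist)
print(len(set(labels.values())))  # 2 crowns found
```

In the paper's setting the flooding would separate touching crowns at saddles of the learned distance surface; this toy mask uses a background gap for easy verification.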
{"title":"Virtual replication of sediment cores for geoarchaeological research in Uruk-Warka (Iraq)","authors":"Max Haibt , Felix Reize , Helmut Brückner , Jörg W.E. Fassbinder , Margarete van Ess","doi":"10.1016/j.ophoto.2025.100093","DOIUrl":"10.1016/j.ophoto.2025.100093","url":null,"abstract":"<div><div>This study presents a novel methodology for the production of high-detail, georeferenced virtual replicas of sediment cores extracted using vibracoring, a widely used technique for subsurface investigations in geoscientific research. In a case study conducted around the ancient city of Uruk in southern Iraq, 150 meters of sediment cores from 25 locations were documented. A specialized photogrammetric technique was developed to rapidly capture the visual characteristics of the stratified sediments before sampling and reuse. Cross-polarization was applied to normalize the resulting textures for enhanced sedimentological analysis. An automated processing pipeline generated georeferenced 3D models with high-detail textures, which were integrated into the UAV-based landscape model of the Uruk-VR digital twin. 
This comprehensive integration of surface and subsurface data offers a foundation for three-dimensional spatial analysis of stratigraphy, facilitating the reconstruction of ancient canal systems and landscape evolution of one of the oldest cities of humankind.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100093"},"PeriodicalIF":0.0,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144510710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An end-to-end deep learning solution for automated LiDAR tree detection in the urban environment","authors":"Julian R. Rice , G. Andrew Fricker , Jonathan Ventura","doi":"10.1016/j.ophoto.2025.100092","DOIUrl":"10.1016/j.ophoto.2025.100092","url":null,"abstract":"<div><div>Cataloging and classifying trees in the urban environment is a crucial step in urban and environmental planning; however, manual collection and maintenance of this data is expensive and time-consuming. Although algorithmic approaches that rely on remote sensing data have been developed for tree detection in forests, they generally struggle in the more varied urban environment. This work proposes a novel end-to-end deep learning method for the detection of trees in the urban environment from remote sensing data. Specifically, we develop and train a novel PointNet-based neural network architecture to predict tree locations directly from LiDAR data augmented with multi-spectral imagery. We compare this model to a number of high-performing baselines on a large and varied dataset in the Southern California region, and find that our method outperforms all baselines in terms of tree detection ability (75.5% F-score) and positional accuracy (2.28 meter root mean squared error), while being highly efficient. We then analyze and compare the sources of errors, and how these reveal the strengths and weaknesses of each approach. 
Our results highlight the importance of fusing spectral and structural information for remote sensing tasks in complex urban environments.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100092"},"PeriodicalIF":0.0,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144306762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The potential & limitations of monoplotting in cross-view geo-localization conditions","authors":"Bradley J. Koskowich , Michael J. Starek , Scott A. King","doi":"10.1016/j.ophoto.2025.100090","DOIUrl":"10.1016/j.ophoto.2025.100090","url":null,"abstract":"<div><div>Cross-view geolocalization (CVGL) describes the general problem of determining a correlation between terrestrial and nadir oriented imagery. Classical keypoint matching methods struggle with the extreme pose transitions between cameras present in a CVGL configuration, while deep neural networks demonstrate superb capacity in this area. Traditional photogrammetry methods like structure-from-motion (SfM) or simultaneous localization and mapping (SLAM) can technically accomplish CVGL, but require a sufficiently dense collection of camera views in order to recover camera pose. This research proposes an alternative CVGL solution: a series of algorithmic operations which can completely automate the calculation of target camera pose via a less common photogrammetry method known as monoplotting, also called single camera resectioning. Monoplotting only requires three inputs: a target terrestrial camera image, a nadir-oriented image, and an underlying digital surface model. 2D-3D point correspondences are derived from the inputs to optimize for the target terrestrial camera pose. The proposed method applies affine keypointing, pixel color quantization, and keypoint neighbor triangulation to codify explicit relationships used to augment keypoint matching operations done in a CVGL context. These matching results are used to achieve better initial 2D-3D point correlations from monoplotting image pairs, resulting in lower error for single camera resectioning. To gauge its effectiveness, the proposed methodology is applied to urban, suburban, and natural environment datasets.
The proposed methodology demonstrates an average 42x improvement in feature matching between CVGL image pairs and improves on inconsistent baseline methods by reducing translation errors by 50%–75%.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"17 ","pages":"Article 100090"},"PeriodicalIF":0.0,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144523812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seeing beyond vegetation: A comparative occlusion analysis between Multi-View Stereo, Neural Radiance Fields and Gaussian Splatting for 3D reconstruction","authors":"Ivana Petrovska, Boris Jutzi","doi":"10.1016/j.ophoto.2025.100089","DOIUrl":"10.1016/j.ophoto.2025.100089","url":null,"abstract":"<div><div>Image-based 3D reconstruction offers realistic scene representation for applications that require accurate geometric information. Although the assumption that images are simultaneously captured, perfectly posed and noise-free simplifies the 3D reconstruction, this rarely holds in real-world settings. A real-world scene comprises multiple objects which obstruct each other and certain object parts are occluded, thus it can be challenging to generate a complete and accurate geometry. Being a part of our environment, we are particularly interested in vegetation that often obscures important structures, leading to incomplete reconstruction of the underlying features. In this contribution, we present a comparative analysis of the geometry behind vegetation occlusions reconstructed by traditional Multi-View Stereo (MVS) and radiance field methods, namely: Neural Radiance Fields (NeRFs), 3D Gaussian Splatting (3DGS) and 2D Gaussian Splatting (2DGS). Excluding certain image parts and investigating how different levels of vegetation occlusion affect the geometric reconstruction, we consider Synthetic masks with occlusion coverage of 10% (Very Sparse), 30% (Sparse), 50% (Medium), 70% (Dense) and 90% (Very Dense). To additionally demonstrate the impact of spatially consistent 3D occlusions, we use Natural masks (up to 35%), where the vegetation is stationary in the 3D scene but its occlusion varies relative to the view-point. Our investigations are based on real-world scenarios: one occlusion-free indoor scenario, on which we apply the Synthetic masks, and one outdoor scenario, from which we derive the Natural masks.
The qualitative and quantitative 3D evaluation is based on point cloud comparison against a ground truth mesh, addressing accuracy and completeness. The conducted experiments and results demonstrate that although MVS shows the lowest accuracy errors in both scenarios, its completeness manifests a sharp decline as the occlusion percentage increases, eventually failing under Very Dense masks. NeRFs manifest robustness in the reconstruction with the highest completeness under masking, although their accuracy decreases proportionally as occlusion increases. 2DGS achieves the second-best accuracy results, outperforming NeRFs and 3DGS, indicating a consistent performance across different occlusion scenarios. Additionally, by using MVS for initialization, 3DGS and 2DGS completeness improves without significantly sacrificing accuracy, due to the more densely reconstructed homogeneous areas. We demonstrate that radiance field methods can compete against traditional MVS, showing robust performance for a complete reconstruction under vegetation occlusions.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"16 ","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144107473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
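The accuracy/completeness evaluation described above can be sketched with brute-force nearest neighbours. This is a hedged toy version: the real evaluation compares point clouds against a ground-truth mesh, and the 0.1 threshold and sample points are assumptions.

```python
# Toy accuracy/completeness metrics for point cloud evaluation.
import math

def nearest_dist(p, cloud):
    """Euclidean distance from point p to its nearest neighbour in cloud."""
    return min(math.dist(p, q) for q in cloud)

def accuracy(reconstruction, ground_truth):
    """Mean distance from reconstructed points to ground truth (lower is better)."""
    return sum(nearest_dist(p, ground_truth) for p in reconstruction) / len(reconstruction)

def completeness(reconstruction, ground_truth, threshold=0.1):
    """Fraction of ground-truth points within `threshold` of the reconstruction."""
    covered = sum(1 for g in ground_truth
                  if nearest_dist(g, reconstruction) <= threshold)
    return covered / len(ground_truth)

gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
rec = [(0.05, 0.0, 0.0), (1.0, 0.0, 0.0)]   # last GT point not reconstructed
print(round(accuracy(rec, gt), 3))          # 0.025
print(round(completeness(rec, gt), 3))      # 0.667: one GT point is missed
```

The trade-off in the abstract is visible even here: a method can be accurate (small distances for what it does reconstruct) yet incomplete (missing occluded ground-truth points).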
{"title":"Direct integration of ALS and MLS for real-time localization and mapping","authors":"Eugeniu Vezeteu , Aimad El Issaoui , Heikki Hyyti , Teemu Hakala , Jesse Muhojoki , Eric Hyyppä , Antero Kukko , Harri Kaartinen , Ville Kyrki , Juha Hyyppä","doi":"10.1016/j.ophoto.2025.100088","DOIUrl":"10.1016/j.ophoto.2025.100088","url":null,"abstract":"<div><div>This paper presents a novel real-time fusion pipeline for integrating georeferenced airborne laser scanning (ALS) and online mobile laser scanning (MLS) data to enable accurate localization and mapping in complex natural environments. To address sensor drift caused by relative Light Detection and Ranging (lidar) and inertial measurements, occlusion affecting the Global Navigation Satellite System (GNSS) signal quality, and differences in the fields of view of the sensors, we propose a tightly coupled lidar-inertial registration system with an adaptive, robust Iterated Error-State Extended Kalman Filter (RIEKF). By leveraging ALS-derived prior maps as a global reference, our system effectively refines the MLS registration, even in challenging environments like forests. A novel coarse-to-fine initialization technique is introduced to estimate the initial transformation between the local MLS and global ALS frames using online GNSS measurements. Experimental results in forest environments demonstrate significant improvements in both absolute and relative trajectory accuracy, with relative mean localization errors as low as 0.17 m for a prior map based on dense ALS data and 0.22 m for a prior map based on sparse ALS data. We found that while GNSS does not significantly improve registration accuracy, it is essential for providing the initial transformation between the ALS and MLS frames, enabling their direct and online fusion. 
The proposed system predicts poses at an inertial measurement unit (IMU) rate of 400 Hz and updates the pose at the lidar frame rate of 10 Hz.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"16 ","pages":"Article 100088"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143816764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
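The asynchronous predict/update scheme, high-rate IMU prediction with lower-rate lidar correction, can be illustrated with a far simpler filter than the paper's RIEKF. A 1D constant-velocity Kalman filter sketch with assumed toy noise values (not the paper's model):

```python
# Illustrative 1D Kalman filter: predict at an IMU-like 400 Hz,
# correct with a lidar-like position fix at 10 Hz. Toy noise values.

class Kalman1D:
    def __init__(self, q=0.01, r=0.25):
        self.x, self.v = 0.0, 0.0   # position [m], velocity [m/s]
        self.p = 1.0                # position variance (scalar, simplified)
        self.q, self.r = q, r       # process / measurement noise

    def predict(self, accel, dt):
        # propagate the state with the IMU acceleration (Euler integration)
        self.v += accel * dt
        self.x += self.v * dt
        self.p += self.q

    def update(self, z):
        # correct the position with a lidar-derived fix
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k

kf = Kalman1D()
dt = 1.0 / 400.0                         # 400 Hz prediction rate
for step in range(1, 401):               # one second of IMU samples
    kf.predict(accel=1.0, dt=dt)         # constant 1 m/s^2 motion
    if step % 40 == 0:                   # every 40th step: 10 Hz correction
        t = step * dt
        kf.update(z=0.5 * t * t)         # noiseless position measurement
print(abs(kf.x - 0.5) < 0.01)            # True: near the 0.5 m ground truth
```

The paper's filter additionally iterates the update, estimates a full error state, and registers lidar scans against the ALS prior map; only the two-rate scheduling is mirrored here.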
{"title":"Transfer learning and single-polarized SAR image preprocessing for oil spill detection","authors":"Nataliia Kussul , Yevhenii Salii , Volodymyr Kuzin , Bohdan Yailymov , Andrii Shelestov","doi":"10.1016/j.ophoto.2024.100081","DOIUrl":"10.1016/j.ophoto.2024.100081","url":null,"abstract":"<div><div>This study addresses the challenge of oil spill detection using Synthetic Aperture Radar (SAR) satellite imagery, employing deep learning techniques to improve accuracy and efficiency. We investigated the effectiveness of various neural network architectures and encoders for this task, focusing on scenarios with limited training data. The research problem centered on enhancing feature extraction from single-channel SAR data to improve oil spill detection performance.</div><div>Our methodology involved developing a novel preprocessing pipeline that converts single-channel SAR data into a three-channel RGB representation. The preprocessing technique normalizes SAR intensity values and encodes extracted features into RGB channels.</div><div>Through an experiment, we have shown that a combination of LinkNet with an EfficientNet-B4 encoder is superior to pairs of other well-known architectures and encoders.</div><div>Quantitative evaluation revealed a significant improvement in F1-score of 0.064 compared to traditional dB-scale preprocessing methods. Qualitative assessment on independent SAR scenes from the Mediterranean Sea demonstrated better detection capabilities, albeit with increased sensitivity to look-alikes.</div><div>We conclude that our proposed preprocessing technique shows promise for enhancing automatic oil spill segmentation from SAR imagery.
The study contributes to advancing oil spill detection methods, with potential implications for environmental monitoring and marine ecosystem protection.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
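The preprocessing idea, converting single-channel SAR intensity into a normalized three-channel representation, might look as follows. The dB conversion and min-max normalization are standard practice; the three derived channels here are illustrative assumptions, not the paper's exact encoding.

```python
# Hedged sketch: single-channel SAR intensity -> dB -> normalized
# values -> three derived channels stacked as an RGB representation.
import math

def to_db(intensity, floor=1e-6):
    """Convert linear backscatter intensity to decibels (floored to avoid log(0))."""
    return 10.0 * math.log10(max(intensity, floor))

def normalize(values):
    """Min-max normalize a list of values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

intensities = [0.001, 0.01, 0.1, 1.0]           # raw backscatter samples
db = [to_db(v) for v in intensities]            # ~ [-30, -20, -10, 0] dB
norm = normalize(db)                            # [0.0, 1/3, 2/3, 1.0]
rgb = [(n, n * n, math.sqrt(n)) for n in norm]  # three illustrative channels
print([round(c, 3) for c in rgb[1]])            # [0.333, 0.111, 0.577]
```

Stacking nonlinear transforms of the same normalized signal gives the RGB-pretrained encoder three complementary views of the backscatter, which is the intuition behind such pipelines.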
{"title":"A new unified framework for supervised 3D crown segmentation (TreeisoNet) using deep neural networks across airborne, UAV-borne, and terrestrial laser scans","authors":"Zhouxin Xi, Dani Degenhardt","doi":"10.1016/j.ophoto.2025.100083","DOIUrl":"10.1016/j.ophoto.2025.100083","url":null,"abstract":"<div><div>Accurately defining and isolating 3D tree space is critical for extracting and analyzing tree inventory attributes, yet it remains a challenge due to the structural complexity and heterogeneity within natural forests. This study introduces TreeisoNet, a suite of supervised deep neural networks tailored for robust 3D tree segmentation across natural forest environments. These networks are specifically designed to identify tree locations, stem components (if available), and crown clusters, making them adaptable to varying scales of laser scanning from airborne laser scanner (ALS), terrestrial laser scanner (TLS), and unmanned aerial vehicle (UAV) platforms. Our evaluation used three benchmark datasets with manually isolated tree references, achieving mean intersection-over-union (mIoU) accuracies of 0.81 for UAV, 0.76 for TLS, and 0.59 for ALS, which are competitive with contemporary algorithms such as ForAINet, Treeiso, Mask R-CNN, and AMS3D. Noise from stem point delineation minimally impacted stem location detection but significantly affected crown clustering. Moderate manual refinement of stem points or tree centers significantly improved tree segmentation accuracies, achieving 0.85 for UAV, 0.86 for TLS, and 0.80 for ALS. The study confirms SegFormer as an effective 3D point-level classifier and an offset-based UNet as a superior segmenter, with the latter outperforming unsupervised solutions like watershed and shortest-path methods.
TreeisoNet demonstrates strong adaptability in capturing invariant tree geometry features, ensuring transferability across different resolutions, sites, and sensors with minimal accuracy loss.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100083"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
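The mIoU metric reported above can be sketched on toy instance label sets, matching each reference crown to its best-overlapping prediction. The point indices and matching rule are illustrative assumptions; benchmark protocols differ in how they pair instances.

```python
# Toy mean intersection-over-union (mIoU) for instance segmentation,
# treating each tree crown as a set of point indices.

def iou(a, b):
    """Intersection over union of two index sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def mean_iou(predicted, reference):
    """Average best-match IoU of each reference instance over the predictions."""
    return sum(max(iou(ref, pred) for pred in predicted)
               for ref in reference) / len(reference)

ref = [{1, 2, 3, 4}, {5, 6, 7, 8}]      # two ground-truth crowns
pred = [{1, 2, 3}, {4, 5, 6, 7, 8}]     # imperfect segmentation: point 4 leaks
print(round(mean_iou(pred, ref), 3))    # 0.775
```

A score of 1.0 would require every reference crown to be reproduced exactly; leaked or missing points lower the per-crown IoU, as in the 0.59–0.86 range reported above.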