{"title":"Spatio-temporal compliance monitoring in land administration using an earth observation–enabled LADM framework","authors":"Okan Yılmaz, Mehmet Alkan","doi":"10.1016/j.ophoto.2026.100124","DOIUrl":"10.1016/j.ophoto.2026.100124","url":null,"abstract":"<div><div>Land Administration Systems (LAS) have traditionally been designed as systems where legal records are updated through formal transactions, yet physical reality monitoring remains largely reactive. While these systems effectively record legal rights (tenure), land use restrictions (planning), and development rights (permits), the physical reality of the world is inherently dynamic. Consequently, the monitoring of conformity between de facto changes on the land (e.g., unauthorized construction, land use discrepancies) and the legal functions defined within LAS must evolve to keep pace with this dynamism. In rapidly developing nations such as Türkiye, verifying the compliance of physical development with spatial plans necessitates continuous spatio-temporal monitoring. This necessity also extends to the detection of cadastral boundary encroachments. While the archival record of the earth's surface provided by aerial photography and satellite imagery enables the 2D and 3D comparison of legal and physical status, the temporal depth of these records serves as the 4th dimension of land administration. This study aims to design the LADM Country Profile for Türkiye regarding land administration functions and to facilitate the compliance checking of legal-physical status using remote sensing data. The functionality of the proposed model was tested at the instance level using temporal satellite imagery derived from Google Earth Engine (GEE).
The results obtained from various case studies demonstrate that the proposed Türkiye Country Profile offers comprehensive solutions for sustainable land-use governance.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"20 ","pages":"Article 100124"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147859407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diagnostic spectral features for plastic litter detection in freshwater environments","authors":"Christine Atuhaire , Ronald Semyalo , Lydia Mazzi Kayondo , Joyce Nakatumba Nabende , Lydia Nakimbugwe , Umar Katongole , Anthony Gidudu","doi":"10.1016/j.ophoto.2026.100126","DOIUrl":"10.1016/j.ophoto.2026.100126","url":null,"abstract":"<div><div>Developing spectral-based algorithms for detecting plastics in natural environments requires an understanding of the spectral properties of plastic materials. However, spectral characterization of plastics and identification of diagnostic spectral features for plastic detection in natural freshwater environments remain unexplored. Unlike earlier studies that identified diagnostic spectral absorption features of plastics based on a single dominant background (either water-only or sand-only), we sought to identify features that account for the coexistence of multiple dominant background materials within the field of view. Spectral measurements were performed on plastic samples collected from the shores of Lake Victoria in Uganda using an SR-3501 Spectroradiometer. Three background surfaces (water, sand and vegetation) representative of the natural environment were selected and used as backgrounds for conducting spectral measurements. Several absorption features in plastic spectra, which aligned with earlier studies, were detected. However, the strength of these absorption features varied significantly across background surfaces, with coefficients of variation exceeding 20% for most features. Diagnostic spectral features for plastics were therefore identified based on freshwater shore zones. Absorption features centered at 1086 nm, 1660 nm, 1681 nm, 1714 nm, 1729 nm, 2054 nm, 2132 nm and 2156 nm were identified as diagnostic of plastic materials. Notably, the 1729 nm feature was detected in more plastic samples than any other, making it particularly useful for developing plastic detection algorithms.
While previous studies identified plastic absorption features in the ∼930–978 nm and ∼1140–1220 nm spectral ranges, this study revealed that, within these ranges, plastics and backgrounds have similar absorption characteristics and therefore cannot be spectrally differentiated using these absorption features.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"20 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147859409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DSM generation from VHR spaceborne imagery using deep learning-based land-cover-aware GCP extraction","authors":"Karlheinz Gutjahr, Ana Gregorac, Janik Deutscher, Roland Perko","doi":"10.1016/j.ophoto.2026.100123","DOIUrl":"10.1016/j.ophoto.2026.100123","url":null,"abstract":"<div><div>Accurate digital surface model (DSM) generation from very high-resolution satellite stereo imagery remains constrained by the labor-intensive requirement for manual ground control point (GCP) measurement. This work presents a fully automated deep learning framework that eliminates this bottleneck by combining SuperPoint feature detection with LightGlue matching to generate both tie-points (TP) between stereo image pairs and GCPs via correspondence with reference ortho-images. Both rely on a transformer architecture with self- and cross-attention, and outperform traditional methods like SIFT, SURF, or ORB in terms of accuracy and speed. A critical innovation integrates land-cover-aware filtering to mitigate systematic height errors arising from geometric misalignments between reference ortho-images and digital elevation models, particularly in urban environments where inaccurate ortho-rectification produces unreliable GCP heights. The land cover itself is classified with a CNN-based architecture, specifically a U-Net variant with categorical focal loss and a custom augmentation strategy in the training phase. Applied to a tri-stereo Pléiades dataset over Eisenstadt, Austria, the method automatically generates around 2000 filtered GCPs that achieve sub-meter planar accuracy (0.5 m RMS in easting/northing) and approximately 2 m vertical accuracy after block adjustment of rational polynomial coefficient sensor models. The proposed land-cover-aware method yields an RMS height error 0.55 m lower than the automated method without land-cover-aware filtering.
By replacing manual GCP measurement with deep learning-based automation and land-cover-aware quality control, this work completes the end-to-end workflow for rapid, high-precision DSM production across diverse terrain types, with direct applicability to urban mapping, change detection, and continental-scale monitoring missions.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"20 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147631650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy analysis and improvement of worldwide elevation models by ICESat-2 data","authors":"Karsten Jacobsen , Gürcan Büyüksalih , Cem Gazioglu","doi":"10.1016/j.ophoto.2026.100115","DOIUrl":"10.1016/j.ophoto.2026.100115","url":null,"abstract":"<div><div>Global or near-global elevation models such as the TanDEM-X Edited Digital Elevation Model (EDEM), ALOS World 3D (AW3D), SRTM and ASTER GDEM (GDEM) show some systematic errors which can be determined and accounted for using accurate reference Digital Elevation Models (DEMs). Of course, if accurate DEMs are available, they can be used instead, but ICESat-2 satellite LiDAR profiles provide accurate reference data that can be used to improve the freely available DEMs worldwide. The spacing of the ICESat-2 LiDAR ground points for the ATL08 data is limited to approximately 100 m in orbit direction and can exceed 1 km between orbits, but this is sufficient for improving the free DEMs. The correction can involve a simple elevation shift, a model tilt, or a correction of higher-order systematic errors.</div><div>A comparison with reference data from an aerial survey revealed a bias in the range of ∼2 m, which could also be due to the aerial photogrammetric reference data. A comparison of the ICESat-2 data with the freely available AW3D and EDEM yielded satisfactory results after improvement by bias and model tilt correction. The original RMSZ of EDEM compared to the reference DTM could be reduced from 2.23 m to 1.30 m by a height model tilt and shift determined by ICESat-2, and for AW3D from 1.81 m to 1.64 m. In this dataset, the influence of systematic errors is limited; the correction reduced the deformations slightly, but the accuracy figures improved only negligibly.
SRTM and ASTER GDEM should no longer be used due to their significantly lower accuracy compared to EDEM and AW3D.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100115"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Map2ImLas: Large-scale 2D-3D airborne dataset with map-based annotations","authors":"Geethanjali Anjanappa, Sander Oude Elberink, Abhisek Maiti, Yaping Lin, George Vosselman","doi":"10.1016/j.ophoto.2025.100112","DOIUrl":"10.1016/j.ophoto.2025.100112","url":null,"abstract":"<div><div>Airborne data are commonly used in mapping, urban planning, and environmental monitoring. However, deep learning (DL) for these tasks is often limited by the lack of large-scale multimodal datasets that represent diverse landscapes and detailed classes. In this work, we present <em>Map2ImLas</em>, a large-scale dataset created using topographic maps and high-resolution airborne data from the Netherlands. The dataset includes 2413 spatially matching tiles of maps, 2D orthoimages, digital surface models, and 3D point clouds, covering approximately 217 km<sup>2</sup> across urban, suburban, industrial, rural, and forested areas. Map2ImLas provides per-pixel and per-point annotations for 20 different classes, applicable to both 2D and 3D semantic segmentation, as well as vector polygons for object delineation tasks. The proposed labeling process is fully automated, dynamically aligning all data sources to generate structured and consistent annotations. The pipeline is scalable and can be adapted to other regions in the Netherlands with minimal changes. We also introduce a DL-based workflow for labeling trees in 3D point clouds, using map data as semantic priors. To support DL applications, we provide a two-fold data split with non-overlapping training, validation, and test tiles. The dataset is benchmarked using several state-of-the-art 2D and 3D segmentation models to demonstrate its usability for semantic segmentation tasks. While the present evaluation focuses on segmentation, the structured vector annotations also enable future research on boundary extraction and object delineation. 
Overall, Map2ImLas reduces the need for manual annotation by reusing existing map data and supports geospatial AI in large-scale mapping and semantic labeling for multimodal data.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100112"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct 3D mapping with a 2D LiDAR using sparse reference maps","authors":"Eugeniu Vezeteu , Aimad El Issaoui , Heikki Hyyti , Jesse Muhojoki , Petri Manninen , Teemu Hakala , Eric Hyyppä , Antero Kukko , Harri Kaartinen , Ville Kyrki , Juha Hyyppä","doi":"10.1016/j.ophoto.2025.100109","DOIUrl":"10.1016/j.ophoto.2025.100109","url":null,"abstract":"<div><div>Precise 3D mapping is crucial for a wide range of geospatial applications, including forest monitoring, infrastructure assessment, and autonomous navigation. While 2D Light Detection and Ranging (LiDAR) sensors offer superior range accuracy and higher point density compared to many 3D LiDARs, their limited sensing geometry makes full 3D reconstruction challenging. In this paper, we address these limitations and achieve robust 3D mapping by proposing a direct method for integrating 2D LiDAR with a 6 Degrees of Freedom (DoF) trajectory and sparse 3D reference maps derived from mobile laser scanning (MLS) or airborne laser scanning (ALS). Our method begins with an initial 6 DoF trajectory and performs batch optimisation by jointly co-registering buffered 2D LiDAR scans to a 3D reference map, enhancing both trajectory accuracy and mapping completeness without relying on 2D scans’ overlap or segmentation. We also introduce a novel, targetless extrinsic calibration approach between 2D LiDAR, 3D LiDAR, and a Global Navigation Satellite System–Inertial Navigation System (GNSS–INS) system that does not rely on overlapping sensor Field of View (FOV). We validate our approach in forest road environments using sparse ALS or MLS reference maps and initial poses from GNSS–INS or 3D LiDAR-inertial odometry. Experiments in forest roads achieved mean localisation accuracies of 0.1 m (using 3D MLS initialisation) and 0.16 m (using GNSS–INS initialisation), reducing drift by up to nine times in translation and six times in rotation. 
The extrinsic calibration method converges even with initial misalignments of up to 40° in rotation and 3 m in translation. The proposed framework enables multi-platform, multi-temporal data fusion, offering a practical solution for field deployment and map correction tasks.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100109"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Achieving higher-resolution visible/near-infrared hyperspectral surface composition on Mars through cross-instrument CRISM-CaSSIS pansharpening","authors":"Adriano Tullo , Beatrice Baschetti , Cristina Re , Sylvain Douté , Livio Leonardo Tornabene , Cristian Carli , Angelo Zinzi , Vidhya Ganesh Rangarajan , Silvia Bertoli , Riccardo La Grassa , Giovanni Munaretto , Natalia Amanda Vergara Sassarini , Matteo Massironi , Gianrico Filacchione , Gabriele Cremonese , Nicolas Thomas","doi":"10.1016/j.ophoto.2026.100121","DOIUrl":"10.1016/j.ophoto.2026.100121","url":null,"abstract":"<div><div>Spectrometry is one of the main tools used in planetary mineralogical characterisation, especially on Mars, driven by past and current missions. Despite recent technical advances, physical limitations restrict the spatial resolution capability of hyperspectral sensors in orbit. While CRISM's ground resolution (down to 18 m/px) is a significant advance, it remains insufficient for detailed, meter-scale studies. Other instruments (e.g., MRO HiRISE, CTX, TGO CaSSIS) offer superior spatial resolution but only provide limited panchromatic or multispectral data. This study explores the application of modern pansharpening algorithms to enhance the spatial resolution of CRISM hyperspectral cubes using CaSSIS images. To validate this approach, the test dataset was brought to a spatial resolution of up to 4.5 m/px, an increase of up to 8 times the original resolution. The analyses conducted show that despite the significant increase in resolution, the spectral quality of the data remains virtually unchanged, preserving its usability for compositional and mineralogical interpretation. Both the alignment methods and the two pansharpening methods used, GSA and MTF-GLP, have been adapted for use with georeferenced data and are part of the PANCO open-source script suite.
Although tested on CRISM, the developed tools can also be applied to a wide range of other remote sensing instruments for both planetary and terrestrial observations.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100121"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147421373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative deep learning models for cloud removal in satellite imagery: A comparative review of GANs and diffusion methods","authors":"Shanika Edirisinghe, Bianca Schoen-Phelan, Svetlana Hensman","doi":"10.1016/j.ophoto.2025.100110","DOIUrl":"10.1016/j.ophoto.2025.100110","url":null,"abstract":"<div><div>Satellite imagery provides essential geospatial data to support various remote sensing applications, including environmental monitoring, disaster management, urban planning, and land utilization studies. However, cloud cover often obstructs the clarity and reliability of satellite images, reducing their usefulness. With advances in deep learning, generative models — particularly Generative Adversarial Networks (GANs) and denoising diffusion models — have emerged as promising solutions for cloud removal in satellite imagery. This review systematically evaluates GAN-based and diffusion-based methods, comparing their strengths, limitations, and performance across diverse geographic and cloud conditions. The analysis shows that GANs generate visually realistic outputs through adversarial training, while diffusion models offer superior spatial and structural fidelity due to iterative noise reduction. Integrating auxiliary data such as Synthetic Aperture Radar (SAR) imagery further enhances cloud removal accuracy. 
This review highlights current challenges and identifies research gaps to support future innovation in satellite image restoration, particularly in cloud removal and generative deep learning for remote sensing.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100110"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145798481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformer-driven monocular high-resolution DTM generation on Mars via multimodal integration of CaSSIS imagery and MOLA altimetry","authors":"Riccardo La Grassa , Cristina Re , Adriano Tullo , Ignazio Gallo , Gabriele Cremonese","doi":"10.1016/j.ophoto.2026.100118","DOIUrl":"10.1016/j.ophoto.2026.100118","url":null,"abstract":"<div><div>High-resolution digital terrain models (DTMs) are essential for geomorphological analysis of planetary surfaces, yet Martian topography remains constrained by the coarse resolution of global datasets such as MOLA and by the limited coverage of stereo-derived products. In this work, we present SurfPlaNet, a deep learning framework based on a Dense Residual Connected Transformer Architecture (DRCT) that reconstructs intermediate-resolution DTMs from single-view (monocular) orbital observations. Unlike traditional photogrammetric pipelines, SurfPlaNet does not require stereo geometry, but instead fuses panchromatic CaSSIS imagery with HRSC–MOLA coarse elevation data to infer detailed topography. The architecture integrates Swin Transformer blocks with an affine calibration head and is trained end-to-end using terrain-aware loss functions combining masked pixel losses, gradient consistency, and total variation. Experimental evaluation against CaSSIS stereo-derived DTMs shows that SurfPlaNet achieves average elevation errors on the order of 61 meters. While this accuracy remains lower than that of stereo-based methods, the model is capable of recovering geomorphological features such as crater rims, ridges, and localized slope variations that are absent in HRSC-MOLA inputs. Crucially, by leveraging MOLA as a global elevation prior, SurfPlaNet produces metrically calibrated predictions that can be generalized across the Martian surface, including areas not covered by CaSSIS stereo.
This demonstrates the potential of monocular transformer-based approaches to complement stereo pipelines, enabling broader coverage of Mars with consistent, scalable topographic reconstructions.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100118"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147421370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continental-scale computer vision models reveal generalizable patterns and pitfalls for urban tree inventories with street-view images","authors":"Thomas A. Lake , Brit B. Laginhas , Brennen T. Farrell, Ross K. Meentemeyer, Chris M. Jones","doi":"10.1016/j.ophoto.2026.100122","DOIUrl":"10.1016/j.ophoto.2026.100122","url":null,"abstract":"<div><div>Accurate, up-to-date catalogs of urban tree populations are crucial for quantifying ecosystem services and enhancing the quality of life in cities. However, mapping tree species cost-effectively remains challenging. In response, remote sensing researchers are developing general-purpose tools to survey plant populations across broad spatial scales. In this study, we developed computer vision models to detect, classify, and map 100 tree genera across 23 cities in North America using Google Street View (GSV) and iNaturalist images. We validated our predictions in independent portions of each city. We then compared our predictions to existing street tree records to evaluate the spatial context of errors using generalized linear mixed-effects models. Overall, the computer vision models identified most ground-truthed street trees (67.1%). Performance was more variable among genera (50.9% ± 23.0%) than cities (67.4% ± 9.3%), and improved with denser street-view coverage, simpler stand structure, and greater training representation, particularly from the focal city. We found that genus classification performed better in continental cities with lower relative diversity, and that seasonal changes in the appearance of trees provided visual cues that moderate classification rates. 
Using widely available street-level imagery is a generalizable and promising avenue for mapping tree distributions across urban environments.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"19 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147420153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}