Virtual replication of sediment cores for geoarchaeological research in Uruk-Warka (Iraq)
Max Haibt, Felix Reize, Helmut Brückner, Jörg W.E. Fassbinder, Margarete van Ess
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 17, Article 100093 (2025-06-25). DOI: 10.1016/j.ophoto.2025.100093

Abstract: This study presents a novel methodology for producing high-detail, georeferenced virtual replicas of sediment cores extracted by vibracoring, a widely used technique for subsurface investigations in geoscientific research. In a case study conducted around the ancient city of Uruk in southern Iraq, 150 meters of sediment cores from 25 locations were documented. A specialized photogrammetric technique was developed to rapidly capture the visual characteristics of the stratified sediments before sampling and reuse. Cross-polarization was applied to normalize the resulting textures for enhanced sedimentological analysis. An automated processing pipeline generated georeferenced 3D models with high-detail textures, which were integrated into the UAV-based landscape model of the Uruk-VR digital twin. This comprehensive integration of surface and subsurface data offers a foundation for three-dimensional spatial analysis of stratigraphy, facilitating the reconstruction of ancient canal systems and the landscape evolution of one of the oldest cities of humankind.

An end-to-end deep learning solution for automated LiDAR tree detection in the urban environment
Julian R. Rice, G. Andrew Fricker, Jonathan Ventura
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 17, Article 100092 (2025-06-07). DOI: 10.1016/j.ophoto.2025.100092

Abstract: Cataloging and classifying trees in the urban environment is a crucial step in urban and environmental planning; however, manual collection and maintenance of this data is expensive and time-consuming. Although algorithmic approaches that rely on remote sensing data have been developed for tree detection in forests, they generally struggle in the more varied urban environment. This work proposes a novel end-to-end deep learning method for the detection of trees in the urban environment from remote sensing data. Specifically, we develop and train a novel PointNet-based neural network architecture to predict tree locations directly from LiDAR data augmented with multi-spectral imagery. We compare this model to a number of high-performing baselines on a large and varied dataset in the Southern California region, and find that our method outperforms all baselines in terms of tree detection ability (75.5% F-score) and positional accuracy (2.28 m root-mean-squared error), while being highly efficient. We then analyze and compare the sources of errors, and how these reveal the strengths and weaknesses of each approach. Our results highlight the importance of fusing spectral and structural information for remote sensing tasks in complex urban environments.

The potential & limitations of monoplotting in cross-view geo-localization conditions
Bradley J. Koskowich, Michael J. Starek, Scott A. King
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 17, Article 100090 (2025-05-23). DOI: 10.1016/j.ophoto.2025.100090

Abstract: Cross-view geolocalization (CVGL) is the general problem of determining a correlation between terrestrial and nadir-oriented imagery. Classical keypoint-matching methods struggle with the extreme pose differences between the cameras in a CVGL configuration, whereas deep neural networks demonstrate superb capacity in this area. Traditional photogrammetry methods such as structure-from-motion (SfM) or simultaneous localization and mapping (SLAM) can technically accomplish CVGL, but they require a sufficiently dense collection of camera views to recover camera pose. This research proposes an alternative CVGL solution: a series of algorithmic operations that fully automates the calculation of target camera pose via a less common photogrammetry method known as monoplotting, also called single-camera resectioning. Monoplotting requires only three inputs: a target terrestrial camera image, a nadir-oriented image, and an underlying digital surface model. 2D–3D point correspondences are derived from the inputs to optimize for the target terrestrial camera pose. The proposed method applies affine keypointing, pixel color quantization, and keypoint neighbor triangulation to codify explicit relationships that augment keypoint matching in a CVGL context. The matching results yield better initial 2D–3D point correlations from monoplotting image pairs, resulting in lower error for single-camera resectioning. To gauge its effectiveness, the proposed methodology is applied to urban, suburban, and natural-environment datasets. It demonstrates an average 42x improvement in feature matching between CVGL image pairs and reduces translation errors by 50%–75% relative to an inconsistent baseline methodology.
{"title":"Seeing beyond vegetation: A comparative occlusion analysis between Multi-View Stereo, Neural Radiance Fields and Gaussian Splatting for 3D reconstruction","authors":"Ivana Petrovska, Boris Jutzi","doi":"10.1016/j.ophoto.2025.100089","DOIUrl":"10.1016/j.ophoto.2025.100089","url":null,"abstract":"<div><div>Image-based 3D reconstruction offers realistic scene representation for applications that require accurate geometric information. Although the assumption that images are simultaneously captured, perfectly posed and noise-free simplifies the 3D reconstruction, this rarely holds in real-world settings. A real-world scene comprises multiple objects which obstruct each other and certain object parts are occluded, thus it can be challenging to generate a complete and accurate geometry. Being a part of our environment, we are particularly interested in vegetation that often obscures important structures, leading to incomplete reconstruction of the underlying features. In this contribution, we present a comparative analysis of the geometry behind vegetation occlusions reconstructed by traditional Multi-View Stereo (MVS) and radiance field methods, namely: Neural Radiance Fields (NeRFs), 3D Gaussian Splatting (3DGS) and 2D Gaussian Splatting (2DGS). Excluding certain image parts and investigating how different level of vegetation occlusions affect the geometric reconstruction, we consider Synthetic masks with different occlusion coverage of 10% (Very Sparse), 30% (Sparse), 50% (Medium), 70% (Dense) and 90% (Very Dense). To additionally demonstrate the impact of spatially consistent 3D occlusions, we use Natural masks (up to 35%) where the vegetation is stationary in the 3D scene, but relative to the view-point. Our investigations are based on real-world scenarios; one occlusion-free indoor scenario, on which we apply the Synthetic masks and one outdoor scenario, from which we derive the Natural masks. The qualitative and quantitative 3D evaluation is based on point cloud comparison against a ground truth mesh addressing accuracy and completeness. The conducted experiments and results demonstrate that although MVS shows lowest accuracy errors in both scenarios, the completeness manifests a sharp decline as the occlusion percentage increases, eventually failing under Very Dense masks. NeRFs manifest robustness in the reconstruction with highest completeness considering masks, although the accuracy proportionally decreases with increasing the occlusions. 2DGS achieves second best accuracy results outperforming NeRFs and 3DGS, indicating a consistent performance across different occlusion scenarios. Additionally, by using MVS for initialization, 3DGS and 2DGS completeness improves without significantly sacrificing the accuracy, due to the more densely reconstructed homogeneous areas. 
We demonstrate that radiance field methods can compete against traditional MVS, showing robust performance for a complete reconstruction under vegetation occlusions.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"16 ","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144107473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
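
Accuracy and completeness of this kind can be computed from nearest-neighbor distances between point sets. The sketch below is a cloud-to-cloud proxy for the paper's cloud-to-mesh evaluation; the threshold tau is an assumed value.

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon, gt, tau=0.05):
    """Accuracy: mean distance from reconstructed points to ground truth.
    Completeness: share of ground-truth points within tau of the
    reconstruction. Cloud-to-cloud proxy; tau is an assumed threshold."""
    acc = cKDTree(gt).query(recon)[0].mean()
    comp = (cKDTree(recon).query(gt)[0] <= tau).mean()
    return float(acc), float(comp)

# Toy example with random point clouds standing in for real reconstructions.
recon = np.random.rand(1000, 3)
gt = np.random.rand(1500, 3)
print(accuracy_completeness(recon, gt))
```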

Direct integration of ALS and MLS for real-time localization and mapping
Eugeniu Vezeteu, Aimad El Issaoui, Heikki Hyyti, Teemu Hakala, Jesse Muhojoki, Eric Hyyppä, Antero Kukko, Harri Kaartinen, Ville Kyrki, Juha Hyyppä
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 16, Article 100088 (2025-04-01). DOI: 10.1016/j.ophoto.2025.100088

Abstract: This paper presents a novel real-time fusion pipeline for integrating georeferenced airborne laser scanning (ALS) and online mobile laser scanning (MLS) data to enable accurate localization and mapping in complex natural environments. To address sensor drift caused by relative Light Detection and Ranging (lidar) and inertial measurements, occlusion affecting Global Navigation Satellite System (GNSS) signal quality, and differences in the fields of view of the sensors, we propose a tightly coupled lidar-inertial registration system with an adaptive, robust Iterated Error-State Extended Kalman Filter (RIEKF). By leveraging ALS-derived prior maps as a global reference, our system effectively refines the MLS registration, even in challenging environments like forests. A novel coarse-to-fine initialization technique is introduced to estimate the initial transformation between the local MLS and global ALS frames using online GNSS measurements. Experimental results in forest environments demonstrate significant improvements in both absolute and relative trajectory accuracy, with relative mean localization errors as low as 0.17 m for a prior map based on dense ALS data and 0.22 m for a prior map based on sparse ALS data. We found that while GNSS does not significantly improve registration accuracy, it is essential for providing the initial transformation between the ALS and MLS frames, enabling their direct and online fusion. The proposed system predicts poses at an inertial measurement unit (IMU) rate of 400 Hz and updates the pose at the lidar frame rate of 10 Hz.
{"title":"Transfer learning and single-polarized SAR image preprocessing for oil spill detection","authors":"Nataliia Kussul , Yevhenii Salii , Volodymyr Kuzin , Bohdan Yailymov , Andrii Shelestov","doi":"10.1016/j.ophoto.2024.100081","DOIUrl":"10.1016/j.ophoto.2024.100081","url":null,"abstract":"<div><div>This study addresses the challenge of oil spill detection using Synthetic Aperture Radar (SAR) satellite imagery, employing deep learning techniques to improve accuracy and efficiency. We investigated the effectiveness of various neural network architectures and encoders for this task, focusing on scenarios with limited training data. The research problem centered on enhancing feature extraction from single-channel SAR data to improve oil spill detection performance.</div><div>Our methodology involved developing a novel preprocessing pipeline that converts single-channel SAR data into a three-channel RGB representation. The preprocessing technique normalizes SAR intensity values and encodes extracted features into RGB channels.</div><div>Through an experiment, we have shown that a combination of the LinkNet with an EfficientNet-B4 is superior to pairs of other well-known architectures and encoders.</div><div>Quantitative evaluation revealed a significant improvement in F1-score of 0.064 compared to traditional dB-scale preprocessing methods. Qualitative assessment on independent SAR scenes from the Mediterranean Sea demonstrated better detection capabilities, albeit with increased sensitivity to look-alike.</div><div>We conclude that our proposed preprocessing technique shows promise for enhancing automatic oil spill segmentation from SAR imagery. The study contributes to advancing oil spill detection methods, with potential implications for environmental monitoring and marine ecosystem protection.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new unified framework for supervised 3D crown segmentation (TreeisoNet) using deep neural networks across airborne, UAV-borne, and terrestrial laser scans","authors":"Zhouxin Xi, Dani Degenhardt","doi":"10.1016/j.ophoto.2025.100083","DOIUrl":"10.1016/j.ophoto.2025.100083","url":null,"abstract":"<div><div>Accurately defining and isolating 3D tree space is critical for extracting and analyzing tree inventory attributes, yet it remains a challenge due to the structural complexity and heterogeneity within natural forests. This study introduces TreeisoNet, a suite of supervised deep neural networks tailored for robust 3D tree segmentation across natural forest environments. These networks are specifically designed to identify tree locations, stem components (if available), and crown clusters, making them adaptable to varying scales of laser scanning from airborne laser scannner (ALS), terrestrial laser scanner (TLS), and unmanned aerial vehicle (UAV). Our evaluation used three benchmark datasets with manually isolated tree references, achieving mean intersection-over-union (mIoU) accuracies of 0.81 for UAV, 0.76 for TLS, and 0.59 for ALS, which are competitive with contemporary algorithms such as ForAINet, Treeiso, Mask R-CNN, and AMS3D. Noise from stem point delineation minimally impacted stem location detection but significantly affected crown clustering. Moderate manual refinement of stem points or tree centers significantly improved tree segmentation accuracies, achieving 0.85 for UAV, 0.86 for TLS, and 0.80 for ALS. The study confirms SegFormer as an effective 3D point-level classifier and an offset-based UNet as a superior segmenter, with the latter outperforming unsupervised solutions like watershed and shortest-path methods. TreeisoNet demonstrates strong adaptability in capturing invariant tree geometry features, ensuring transferability across different resolutions, sites, and sensors with minimal accuracy loss.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100083"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Performance analysis of ultra-wideband positioning for measuring tree positions in boreal forest plots
Zuoya Liu, Harri Kaartinen, Teemu Hakala, Heikki Hyyti, Juha Hyyppä, Antero Kukko, Ruizhi Chen
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 15, Article 100087 (2025-01-01). DOI: 10.1016/j.ophoto.2025.100087

Abstract: Accurate individual tree locations enable efficient forest inventory management and automation, and support precise forest surveys, management decisions and future individual-tree harvesting plans. In this paper, we compared and analyzed in detail the performance of an ultra-wideband (UWB) data-driven method for mapping individual tree locations in boreal forest sample plots of varying complexity. Twelve forest sample plots were selected from varying forest-stand conditions representing different development stages, stem densities and abundance of sub-canopy growth in boreal forests. These plots were classified into three categories ("Easy", "Medium" and "Difficult") according to the varying stand conditions. The experimental results show that the UWB data-driven method maps individual tree locations accurately, with total root-mean-squared errors (RMSEs) of 0.17 m, 0.20 m, and 0.26 m for "Easy", "Medium" and "Difficult" forest plots, respectively, providing a strong reference for forest surveys.
{"title":"Detecting and measuring fine-scale urban tree canopy loss with deep learning and remote sensing","authors":"David Pedley, Justin Morgenroth","doi":"10.1016/j.ophoto.2025.100082","DOIUrl":"10.1016/j.ophoto.2025.100082","url":null,"abstract":"<div><div>Urban trees provide a multitude of environmental and amenity benefits for city occupants yet face ongoing risk of removal due to urban pressures and the preferences of landowners. Understanding the extent and location of canopy loss is critical for the effective management of urban forests. Although city-scale assessments of urban forest canopy cover are common, the accurate identification of fine-scale canopy loss remains challenging. Evaluating change at the property scale is of particular importance given the localised benefits of urban trees and the scale at which tree removal decisions are made.</div><div>The objective of this study was to develop a method to accurately detect and quantify the city-wide loss of urban tree canopy (UTC) at the scale of individual properties using publicly available remote sensing data. The study area was the city of Christchurch, New Zealand, with the study focussed on UTC loss that occurred between 2016 and 2021. To accurately delineate the 2016 UTC, a semantic segmentation deep learning model (DeepLabv3+) was pretrained using existing UTC data and fine-tuned using high resolution aerial imagery. The output of this model was then segmented into polygons representing individual trees using the Segment Anything Model. To overcome poor alignment of aerial imagery, LiDAR point cloud data was utilised to identify changes in height between 2016 and 2021, which was overlaid across the 2016 UTC to map areas of UTC loss. The accuracy of UTC loss predictions was validated using a visual comparison of aerial imagery and LiDAR data, with UTC loss quantified for each property within the study area.</div><div>The loss detection method achieved accurate results for the property-scale identification of UTC loss, including a mean F1 score of 0.934 and a mean IOU of 0.883. Precision values were higher than recall values (0.941 compared to 0.811), which reflected a deliberately conservative approach to avoid false positive detections. Approximately 14.5% of 2016 UTC was lost by 2021, with 74.9% of the UTC loss occurring on residential land. This research provides a novel geospatial method for evaluating fine-scale city-wide tree dynamics using remote sensing data of varying type and quality with imperfect alignment. This creates the opportunity for detailed evaluation of the drivers of UTC loss on individual properties to enable better management of existing urban forests.</div></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"15 ","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143137195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Intensity-based stochastic model of terrestrial laser scanners: Methodological workflow, empirical derivation and practical benefit
Florian Schill, Christoph Holst, Daniel Wujanz, Jens Hartmann, Jens-André Paffenholz
ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 15, Article 100079 (2025-01-01). DOI: 10.1016/j.ophoto.2024.100079

Abstract: After more than twenty years of commercial use, laser scanners have reached technical maturity and consequently became a standard tool for 3D-data acquisition across various fields of application. However, meaningful stochastic information regarding the achieved metric quality of recorded points remains an open research question. Recent research demonstrated that raw intensity values can be deployed to derive stochastic models for reflectorless rangefinders. Yet all existing studies focused on single instances of particular laser scanners, and deriving the stochastic models required significant effort.

Motivated by these shortcomings, this study focuses on comparing stochastic models for a series of eight identical phase-based scanners that differ in age, working hours and date of last calibration. To achieve this, a standardised methodological workflow is suggested to derive the unknown parameters of the individual stochastic models. Based on the generated outcome, a comparison is conducted that clarifies whether a universally applicable stochastic model (type calibration) can be used for a particular scanner model or whether individual parameter sets are still required for every scanner (instance calibration), in order to validate the practical benefit and usability of those models. The results successfully demonstrate that the computed stochastic model is transferable to all individual scanners of the series.