ISPRS Open Journal of Photogrammetry and Remote Sensing: Latest Articles

Transfer learning from citizen science photographs enables plant species identification in UAV imagery
Pub Date: 2022-08-01 | DOI: 10.1016/j.ophoto.2022.100016
Salim Soltani, Hannes Feilhauer, Robbert Duker, Teja Kattenborn
Accurate information on the spatial distribution of plant species and communities is in high demand for various fields of application, such as nature conservation, forestry, and agriculture. A series of studies has shown that Convolutional Neural Networks (CNNs) accurately predict plant species and communities in high-resolution remote sensing data, in particular data at the centimeter scale acquired with Unoccupied Aerial Vehicles (UAV). However, such tasks often require ample training data, which is commonly generated in the field via geocoded in-situ observations or by labeling remote sensing data through visual interpretation. Both approaches are laborious and can present a critical bottleneck for CNN applications. An alternative source of training data is knowledge on the appearance of plants in the form of plant photographs from citizen science projects such as the iNaturalist database. Such crowd-sourced plant photographs typically exhibit very different perspectives and great heterogeneity in various aspects, yet the sheer volume of data could reveal great potential for application to bird's-eye views from remote sensing platforms. Here, we explore the potential of transfer learning from such a crowd-sourced data treasure to the remote sensing context. We investigate, first, whether crowd-sourced plant photographs can be used for CNN training and subsequent mapping of plant species in high-resolution remote sensing imagery. Second, we test whether the predictive performance can be increased by a priori selecting photographs that share a more similar perspective to the remote sensing data. We used two case studies to test our proposed approach with multiple RGB orthoimages acquired from UAV, with the target plant species Fallopia japonica and Portulacaria afra respectively. Our results demonstrate that CNN models trained with heterogeneous, crowd-sourced plant photographs can indeed predict the target species in UAV orthoimages with surprising accuracy. Filtering the crowd-sourced photographs used for training by acquisition properties increased the predictive performance. This study demonstrates that citizen science data can effectively alleviate a common bottleneck for vegetation assessments and provides an example of how the ever-increasing availability of crowd-sourced and big data can be harnessed for remote sensing applications.
Citations: 3
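The a priori selection of photographs by acquisition properties described in this abstract can be pictured as a simple metadata screen that keeps only views resembling the bird's-eye perspective of UAV orthoimagery. A minimal sketch; the field names (`view_angle_deg`, `camera_distance_m`) and thresholds are illustrative assumptions, not iNaturalist's actual schema:

```python
def select_training_photos(records, max_view_angle=30.0, max_distance_m=3.0):
    """Keep records whose metadata suggests a nadir-like, close-range view.

    Records without the relevant metadata are kept for later manual screening.
    """
    selected = []
    for rec in records:
        angle = rec.get("view_angle_deg")    # angle from vertical, if known
        dist = rec.get("camera_distance_m")  # camera-to-plant distance, if known
        if angle is not None and angle > max_view_angle:
            continue  # oblique side view: unlike a UAV orthoimage
        if dist is not None and dist > max_distance_m:
            continue  # distant scene shot rather than a plant close-up
        selected.append(rec)
    return selected

photos = [
    {"id": 1, "view_angle_deg": 10.0, "camera_distance_m": 1.5},
    {"id": 2, "view_angle_deg": 80.0, "camera_distance_m": 1.0},  # side view
    {"id": 3},  # no metadata: kept
]
kept = select_training_photos(photos)
```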
Generating impact maps from bomb craters automatically detected in aerial wartime images using marked point processes
Pub Date: 2022-08-01 | DOI: 10.1016/j.ophoto.2022.100017
Christian Kruse, Dennis Wittich, Franz Rottensteiner, Christian Heipke
Even more than 75 years after the Second World War, numerous unexploded bombs (duds) linger in the ground and pose a considerable hazard to society. The areas containing these duds are documented in so-called impact maps, which are based on the locations of exploded bombs; these locations can be found in aerial images taken shortly after bombing. To generate impact maps, in this paper we present a novel approach based on marked point processes (MPPs) for the automatic detection of bomb craters in such images, some of which are overlapping. The object model for the craters is represented by circles and is embedded in the MPP framework. By means of stochastic sampling, the most likely configuration of objects within the scene is determined. Each configuration is evaluated using an energy function that describes the consistency with a predefined object model: high gradient magnitudes along the object borders and homogeneous grey values inside the objects are favoured, while overlaps between objects are penalized. Reversible Jump Markov Chain Monte Carlo sampling, in combination with simulated annealing, provides the global optimum of the energy function. Our procedure allows the combination of individual detection results covering the same location. Afterwards, a probability map for duds is generated from the detections via kernel density estimation, and areas around the detections are classified as contaminated, resulting in an impact map. Our results, based on 74 aerial wartime images taken over different areas in Central Europe, show the potential of the method; among other findings, a clear improvement is achieved by using redundant image information. We also compared the MPP method for bomb crater detection with a state-of-the-art convolutional neural network (CNN) for generating region proposals; it turned out that the CNN outperforms the MPPs if a sufficient amount of representative training data is available and a threshold for a region to be considered a crater is properly tuned prior to running the experiments. If this is not the case, the MPP approach achieves better results.
Citations: 0
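The final step of the pipeline, turning crater detections into an impact map via kernel density estimation, can be sketched with a plain Gaussian kernel over detection centres; the bandwidth, threshold and coordinates below are illustrative assumptions, not values from the paper:

```python
import math

def kde_density(x, y, detections, bandwidth=25.0):
    """2D Gaussian kernel density at (x, y), built from crater centres."""
    s = 0.0
    for cx, cy in detections:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        s += math.exp(-d2 / (2.0 * bandwidth ** 2))
    return s / (2.0 * math.pi * bandwidth ** 2 * len(detections))

def impact_map(detections, grid, threshold):
    """Classify grid cells as contaminated where crater density is high."""
    return [(x, y) for (x, y) in grid
            if kde_density(x, y, detections) >= threshold]

# Toy detections: a cluster of two craters and one isolated crater.
craters = [(100.0, 100.0), (110.0, 95.0), (500.0, 500.0)]
near = kde_density(105.0, 100.0, craters)  # inside the cluster
far = kde_density(300.0, 300.0, craters)   # away from all craters
```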
Spatially autocorrelated training and validation samples inflate performance assessment of convolutional neural networks
Pub Date: 2022-08-01 | DOI: 10.1016/j.ophoto.2022.100018
Teja Kattenborn, Felix Schiefer, Julian Frey, Hannes Feilhauer, Miguel D. Mahecha, Carsten F. Dormann
Deep learning, and particularly Convolutional Neural Networks (CNNs), in concert with remote sensing are becoming standard analytical tools in the geosciences. A series of studies has presented the seemingly outstanding performance of CNNs for predictive modelling. However, the predictive performance of such models is commonly estimated using random cross-validation, which does not account for spatial autocorrelation between training and validation data. Independent of the analytical method, such spatial dependence will inevitably inflate the estimated model performance. This problem is ignored in most CNN-related studies and suggests a flaw in their validation procedure. Here, we demonstrate how neglecting spatial autocorrelation during cross-validation leads to an optimistic model performance assessment, using the example of a tree species segmentation problem in multiple, spatially distributed drone image acquisitions. We evaluated CNN-based predictions with test data sampled from 1) randomly sampled hold-outs and 2) spatially blocked hold-outs. Assuming that block cross-validation provides a realistic estimate of model performance, validation with randomly sampled hold-outs overestimated the model performance by up to 28%. Smaller training sample sizes increased this optimism. Spatial autocorrelation among observations was significantly higher within than between different remote sensing acquisitions. Thus, model performance should be tested with spatial cross-validation strategies and multiple independent remote sensing acquisitions. Otherwise, the estimated performance of any geospatial deep learning method is likely to be overestimated.
Citations: 26
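The contrast between random and spatially blocked hold-outs can be sketched as a leave-one-block-out split, in which no spatial block ever contributes samples to both sides of a split. A minimal illustration, not the authors' implementation:

```python
def block_cv_splits(n_samples, block_of):
    """Yield (train, test) index lists, holding out one spatial block at a time.

    `block_of` maps a sample index to its spatial block id; unlike a random
    hold-out, spatially close samples stay on the same side of each split.
    """
    blocks = sorted({block_of(i) for i in range(n_samples)})
    for held_out in blocks:
        test = [i for i in range(n_samples) if block_of(i) == held_out]
        train = [i for i in range(n_samples) if block_of(i) != held_out]
        yield train, test

# Toy example: 6 samples in 3 spatial blocks of 2 neighbouring samples each.
splits = list(block_cv_splits(6, block_of=lambda i: i // 2))
```

Evaluating a model only on such held-out blocks removes the optimistic bias that spatial autocorrelation introduces into randomly sampled hold-outs.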
Comparison of neural networks and k-nearest neighbors methods in forest stand variable estimation using airborne laser data
Pub Date: 2022-04-01 | DOI: 10.1016/j.ophoto.2022.100012
Andras Balazs, Eero Liski, Sakari Tuominen, Annika Kangas
In the remote sensing of forests, point cloud data from airborne laser scanning contain high-value information for predicting the volume of growing stock and the size of trees. At the same time, laser scanning data allow a very high number of potential features to be extracted from the point cloud for predicting forest variables. In some methods, the features are first extracted by user-defined algorithms and the best features are selected based on supervised learning, whereas both tasks can be carried out automatically by deep learning methods, typically based on deep neural networks. In this study we tested the k-nearest neighbor method combined with a genetic algorithm (k-NN), an artificial neural network (ANN), a 2-dimensional convolutional neural network (2D-CNN) and a 3-dimensional CNN (3D-CNN) for estimating the following forest variables: volume of growing stock, stand mean height and mean diameter. The results indicate that there were no major differences in the accuracy of the tested methods, but the ANN and 3D-CNN generally resulted in the lowest RMSE values for the predicted forest variables and the highest R² values between the predicted and observed forest variables. The lowest RMSE scores were 20.3% (3D-CNN), 6.4% (3D-CNN) and 11.2% (ANN), and the highest R² results 0.90 (3D-CNN), 0.95 (3D-CNN) and 0.85 (ANN), for volume of growing stock, stand mean height and mean diameter, respectively. Covariances of all response variable combinations and all prediction methods were lower than the corresponding covariances of the field observations. ANN predictions had the highest covariances for the mean height vs. mean diameter and total growing stock vs. mean diameter combinations, and 3D-CNN for mean height vs. total growing stock. CNNs have a distinct theoretical advantage over the other methods in complex recognition or classification tasks, but utilizing their full potential may require higher-density point clouds than applied here. Thus, the relatively low density of the point cloud data may have contributed to the somewhat inconclusive ranking of the methods in this study. The input data and computer codes are available at: https://github.com/balazsan/ALS_NNs.
Citations: 3
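The accuracy figures quoted in this abstract (RMSE expressed as a percentage, and R² between predicted and observed values) follow the standard definitions, which can be computed as below; the observation and prediction values are made-up toy numbers:

```python
import math

def relative_rmse(observed, predicted):
    """RMSE as a percentage of the mean of the observed values."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

def r_squared(observed, predicted):
    """Coefficient of determination between observed and predicted values."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy stand volumes (m3/ha): observed field values vs. model predictions.
obs = [200.0, 250.0, 300.0, 350.0]
pred = [210.0, 240.0, 310.0, 340.0]
```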
Geometric calibration of a hyperspectral frame camera with simultaneous determination of sensors misalignment
Pub Date: 2022-04-01 | DOI: 10.1016/j.ophoto.2022.100015
Lucas D. Santos, Antonio M.G. Tommaselli, Adilson Berveglieri, Nilton N. Imai, Raquel A. Oliveira, Eija Honkavaara
The recent development of lightweight and relatively low-cost hyperspectral sensors has created new perspectives for remote sensing applications. This study aimed to investigate the geometric calibration of a hyperspectral frame camera based on a tuneable Fabry–Pérot interferometer (FPI) and two sensors. The radiation passes through the optics and then through the FPI, where it is redirected to the two sensors by a beam-splitting prism. Previous studies have shown significant variations between the interior orientation parameters for the different bands, both between bands of the same sensor and between sensors, and that these variations are due to the principle of image acquisition. Discrepancies of tens of pixels were obtained by comparing image coordinates measured in different bands. In this research, we propose calibrating this camera in a static mode with changes to the mathematical calibration model. The restriction of obtaining only one set of exterior orientation parameters per hypercube was applied, adding parameters for the misalignment between the sensors and the parameters of a linear function relating the camera principal distance to the wavelength. Applying the parameters estimated with this approach reduced the discrepancies between image coordinates measured in different bands to less than one pixel. Using the sensor calibration parameters in mobile UAV operation, in an aerial bundle adjustment, reduced the root mean square error (RMSE) at checkpoints by approximately 20% compared to the traditional model, in which the interior orientation parameters and lens distortions were calibrated for each band separately. Thus, accurate results were obtained that make the use of this camera more practical, since only one set of calibration parameters is needed for all bands.
Citations: 0
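The linear function relating principal distance to wavelength can be estimated by ordinary least squares over the per-band values. A minimal sketch; the wavelengths and principal distances below are illustrative numbers, not the paper's calibration values:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (intercept a, slope b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Illustrative per-band data: wavelength (nm) vs. principal distance (mm).
wavelengths = [500.0, 600.0, 700.0, 800.0]
principal_distances = [10.90, 10.92, 10.94, 10.96]
a, b = fit_linear(wavelengths, principal_distances)
predicted_pd = a + b * 650.0  # principal distance interpolated for a new band
```

With such a model, a single (a, b) pair replaces one principal-distance value per band.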
Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy
Pub Date: 2022-04-01 | DOI: 10.1016/j.ophoto.2022.100014
Norbert Haala, Michael Kölle, Michael Cramer, Dominik Laupheimer, Florian Zimmermann
During the last two decades, UAVs have emerged as a standard platform for photogrammetric data collection. The main motivation in that early phase was cost-effective airborne image collection over areas of limited size. This was already feasible with rather simple payloads, such as an off-the-shelf compact camera and a navigation-grade GNSS sensor. Meanwhile, dedicated sensor systems enable applications that were not feasible in the past. One example, discussed in this paper, is the airborne collection of dense 3D point clouds at millimetre accuracy. For this purpose, we collect both LiDAR and image data from a joint UAV platform and apply so-called hybrid georeferencing. This process integrates photogrammetric bundle block adjustment with direct georeferencing of LiDAR point clouds, improving the georeferencing accuracy of the LiDAR point cloud by an order of magnitude. We demonstrate the feasibility of our approach in the context of a project that aims at monitoring subsidence of about 10 mm/year. The area of interest is a ship lock and its vicinity of mixed use. In that area, multiple UAV flights were captured and evaluated over a period of three years. As our main contribution, we demonstrate that 3D point accuracies at sub-centimetre level can be achieved. This is realized by the joint orientation of laser scans and images in a hybrid adjustment framework, which enables accuracies corresponding to the GSD of the captured imagery.
Citations: 9
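The paper's hybrid adjustment jointly orients laser scans and images; as a much-simplified single-coordinate analogue of why fusing the two sources helps, two independent estimates of the same point can be combined by inverse-variance weighting, which always yields a smaller variance than either input. The heights and standard deviations below are illustrative assumptions:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates.

    Returns the fused estimate and its (reduced) variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# One point height (m): LiDAR-only estimate vs. image-block estimate.
z, z_var = fuse(101.250, 0.010 ** 2, 101.244, 0.005 ** 2)
```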
Detection of anomalous vehicle trajectories using federated learning
Pub Date: 2022-04-01 | DOI: 10.1016/j.ophoto.2022.100013
Christian Koetsier, Jelena Fiosina, Jan N. Gremmel, Jörg P. Müller, David M. Woisetschläger, Monika Sester
Nowadays, mobile positioning devices such as global navigation satellite systems (GNSS), but also external sensor technology like cameras, allow an efficient online collection of trajectories, which reflect the behavior of moving objects such as cars. The data can be used for various applications, e.g., traffic planning or updating maps, which need many trajectories to extract and infer the desired information, especially when machine or deep learning approaches are used. Often, the amount and diversity of necessary data exceed what can be collected by individuals or even single companies. Currently, data owners, e.g., vehicle producers or service operators, are reluctant to share data due to data privacy rules or because of the risk of sharing information with competitors, which could jeopardize the data owner's competitive advantage. A promising approach to exploit data from several data owners without directly accessing the data is federated learning, which allows collaborative learning by exchanging only model parameters, not raw data.

In this paper, we address the problem of anomaly detection in vehicle trajectories and investigate the benefits of using federated learning. To this end, we apply several state-of-the-art learning algorithms, such as one-class support vector machines (OCSVM) and isolation forests, thus solving a one-class classification problem. Based on these learning mechanisms, we propose and verify a federated architecture for the collaborative identification of anomalous trajectories at several intersections. We demonstrate that the federated approach is beneficial not only for improving overall anomaly detection accuracy, but also for each individual data owner. The experiments show that federated learning increases the anomaly detection accuracy from an average AUC-ROC score of 97% at individual intersections to up to 99% with cooperation.
Citations: 10
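The parameter-exchange idea behind federated learning can be sketched with a FedAvg-style weighted average of locally trained model parameters. This is a generic sketch of the concept, not the paper's aggregation scheme for OCSVM or isolation forests, and the parameter vectors and sample counts are made up:

```python
def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg-style).

    Only parameters cross organisational boundaries; the raw trajectories
    never leave the intersection that recorded them.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[j] * n for p, n in zip(client_params, client_sizes)) / total
        for j in range(dim)
    ]

# Three intersections, each holding a locally trained 2-parameter model.
params = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sizes = [100, 300, 100]  # trajectories observed per intersection
global_params = federated_average(params, sizes)
```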
Pavement distress detection using terrestrial laser scanning point clouds – Accuracy evaluation and algorithm comparison
Pub Date: 2022-01-01 | DOI: 10.1016/j.ophoto.2021.100010
Ziyi Feng, Aimad El Issaoui, Matti Lehtomäki, Matias Ingman, Harri Kaartinen, Antero Kukko, Joona Savela, Hannu Hyyppä, Juha Hyyppä
In this paper, we compared five crack detection algorithms using terrestrial laser scanner (TLS) point clouds. The methods build on common point cloud processing techniques, working on along- and across-track profiles, surface fitting or local pointwise features, with or without machine learning. The crack area and volume were calculated from the crack points detected by the algorithms. The completeness, correctness, and F1 score of each algorithm were computed against manually collected references. Ten 1 m by 3.5 m plots containing 75 distresses of six distress types (depression, disintegration, pothole, longitudinal, transverse, and alligator cracks) were selected from a 3-km-long road to capture the variability of distresses. For crack detection at the plot level, the best algorithm achieved a completeness of up to 0.844, a correctness of up to 0.853, and an F1 score of up to 0.849. The best algorithm's overall (ten plots combined) completeness, correctness, and F1 score were 0.642, 0.735, and 0.685 respectively. For crack area estimation, the overall mean absolute percentage errors (MAPE) of the two best algorithms were 19.8% and 20.3%. In crack volume estimation, the two best algorithms resulted in 19.3% and 14.5% MAPE. When the plots were grouped by crack detection complexity, in the 'easy' category the best algorithm reached a crack area estimation MAPE of 8.9%, while for crack volume estimation the MAPE obtained by the best algorithm was 0.7%.
Citations: 10
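Completeness, correctness, and F1 score are the standard detection metrics (recall, precision, and their harmonic mean). Given counts of matched references, false detections, and missed references, they are computed as follows; the counts in the example are illustrative, not taken from the paper:

```python
def detection_scores(tp, fp, fn):
    """Completeness (recall), correctness (precision) and F1 from match counts.

    tp: reference cracks matched by a detection
    fp: detections with no matching reference crack
    fn: reference cracks with no matching detection
    """
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    f1 = 2 * completeness * correctness / (completeness + correctness)
    return completeness, correctness, f1

# Illustrative example: 64 matched cracks, 23 false detections, 11 misses.
comp, corr, f1 = detection_scores(tp=64, fp=23, fn=11)
```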
Semantic segmentation of point cloud data using raw laser scanner measurements and deep neural networks
Pub Date: 2022-01-01 | DOI: 10.1016/j.ophoto.2021.100011
Risto Kaijaluoto, Antero Kukko, Aimad El Issaoui, Juha Hyyppä, Harri Kaartinen
Deep learning methods based on convolutional neural networks have been shown to give excellent results in the semantic segmentation of images, but the inherent irregularity of point cloud data complicates their use for semantically segmenting 3D laser scanning data. To overcome this problem, point cloud networks specialized for the purpose have been implemented since 2017, but finding the most appropriate way to semantically segment point clouds is still an open research question. In this study we attempted semantic segmentation of point cloud data with convolutional neural networks using only the raw measurements provided by a profiling laser scanner capable of multiple echo detection. We formatted the measurements into a series of 2D rasters, where each raster contains the measurements (range, reflectance, echo deviation) of a single scanner mirror rotation, in order to exploit the rich research on semantic segmentation of 2D images with convolutional neural networks. A similar approach for a profiling laser scanner in a forest context has never been proposed before. A boreal forest in the Evo region near Hämeenlinna, Finland, was used as the experimental study area. The data were collected with the FGI Akhka-R3 backpack laser scanning system, georeferenced, and then manually labelled into ground, understorey, tree trunk and foliage classes for training and evaluation purposes. The labelled points were then transformed back to 2D rasters and used for training three different neural network architectures. Furthermore, the same georeferenced data in point cloud format were used to train the state-of-the-art point cloud semantic segmentation network RandLA-Net, and the results were compared with those of our method. Our best semantic segmentation network reached a mean Intersection-over-Union value of 80.1%, comparable to the 80.6% reached by the point-cloud-based RandLA-Net. The numerical results and visual analysis of the resulting point clouds show that our method is a valid way of performing semantic segmentation of point clouds, at least in the forest context. The labelled datasets have also been released to the research community.
Citations: 5
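The mean Intersection-over-Union used for evaluation is computed per class and then averaged. A minimal sketch over the four classes named in the abstract, with made-up toy labels:

```python
def mean_iou(true_labels, pred_labels, classes):
    """Mean Intersection-over-Union over semantic classes.

    Per class: |true ∩ predicted| / |true ∪ predicted|, then the average
    over all classes that occur in either labelling.
    """
    ious = []
    for c in classes:
        inter = sum(1 for t, p in zip(true_labels, pred_labels)
                    if t == c and p == c)
        union = sum(1 for t, p in zip(true_labels, pred_labels)
                    if t == c or p == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

truth = ["ground", "ground", "trunk", "foliage", "foliage", "understorey"]
pred  = ["ground", "trunk",  "trunk", "foliage", "ground",  "understorey"]
miou = mean_iou(truth, pred, ["ground", "understorey", "trunk", "foliage"])
```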
Deep learning approach for Sentinel-1 surface water mapping leveraging Google Earth Engine
Pub Date: 2021-12-01 | DOI: 10.1016/j.ophoto.2021.100005
Timothy Mayer, Ate Poortinga, Biplov Bhandari, Andrea P. Nicolau, Kel Markert, Nyein Soe Thwal, Amanda Markert, Arjen Haag, John Kilbride, Farrukh Chishtie, Amit Wadhwa, Nicholas Clinton, David Saah
Satellite remote sensing plays an important role in mapping the location and extent of surface water. A variety of approaches are available for mapping surface water, but deep learning approaches are not commonplace, as they are 'data hungry' and require large amounts of computational resources. However, with the availability of various satellite sensors and rapid development in cloud computing, the remote sensing scientific community is adopting modern deep learning approaches. The integration of the cloud-based Google AI Platform and Google Earth Engine enables users to deploy calculations at scale. In this paper, we investigate two methods of automatic data labeling: 1) the Joint Research Centre (JRC) surface water maps; 2) an Edge-Otsu dynamic threshold approach. We deployed a U-Net convolutional neural network to map surface water from Sentinel-1 Synthetic Aperture Radar (SAR) data and tested the model performance using different hyperparameter tuning combinations to identify the optimal learning rate and loss function. The performance was then evaluated using an independent validation data set. We tested 12 models overall and found that the models utilizing the JRC data labels showed better model performance, with F1-scores ranging from 0.972 to 0.986 across the training, test and validation efforts. Additionally, an independently sampled high-resolution data set was used to further evaluate model performance. From this independent validation effort we observed that models leveraging the JRC data labels produced F1-scores ranging from 0.913 to 0.922. A pairwise comparison of models, varying input data, learning rates, and loss function constituents, revealed the JRC Adjusted Binary Cross-Entropy Dice model to be statistically different from the 66 other model combinations, and to display the highest relative evaluation metrics, including accuracy, precision, Cohen's kappa coefficient, and F1-score. These results are in the same range as many conventional methods. We observed that the integration of the Google AI Platform into Google Earth Engine can be a powerful tool for deploying deep-learning algorithms at scale, and that automatic data labeling can be an effective strategy in the development of deep-learning models; however, independent data validation remains an important step in model evaluation.
Citations: 25
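The Edge-Otsu labeling approach relies on Otsu's histogram threshold, which picks the cut that maximises between-class variance, to separate dark water backscatter from brighter land. A self-contained sketch of the thresholding core; the backscatter values are toy numbers, not Sentinel-1 data:

```python
def otsu_threshold(values, bins=256, lo=0.0, hi=1.0):
    """Otsu's method: the histogram cut maximising between-class variance."""
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = s0 = 0.0  # weight and first moment of the "below cut" class
    for t in range(bins):
        w0 += hist[t]
        s0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = s0 / w0                       # mean below the cut
        m1 = (total_sum - s0) / (total - w0)  # mean above the cut
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return lo + (best_t + 1) * width

# Bimodal toy values: a dark "water" mode and a bright "land" mode.
vals = [0.05, 0.06, 0.07, 0.08, 0.09, 0.70, 0.72, 0.75, 0.78, 0.80]
thr = otsu_threshold(vals)
```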