ISPRS Open Journal of Photogrammetry and Remote Sensing: Latest Publications

Precision estimation of 3D objects using an observation distribution model in support of terrestrial laser scanner network design
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 8, Article 100035. Pub Date: 2023-04-01. DOI: 10.1016/j.ophoto.2023.100035
Authors: D.D. Lichti, T.O. Chan, Kate Pexman
Abstract: First order geometric network design is an important quality assurance process for terrestrial laser scanning of complex built environments for the construction of digital as-built models. A key design task is the determination of a set of instrument locations, or viewpoints, that provide complete site coverage while meeting quality criteria. Although simplified point precision measures are often used for this purpose, precision measures for the common geometric objects found in the built environment (planes, cylinders and spheres) are arguably more relevant indicators of as-built model quality. Computing such measures at the design stage, which is not currently done, requires the generation of artificial observations by ray casting, which can discourage their adoption. This paper presents models for the rigorous computation of geometric object precision without the need for ray casting. Instead, a model for the 2D distribution of angular observations is coupled with candidate viewpoint-object geometry to derive the covariance matrix of the parameters. Three-dimensional models are developed and tested for vertical cylinders, spheres, and vertical, horizontal and tilted planes. Precision estimates from real experimental data were used as the reference for assessing the accuracy of the predicted precision, specifically the standard deviation, of the parameters of these objects. Results show that the mean accuracy of the model-predicted precision was 4.3% (of the real data value) or better for the planes, regardless of plane orientation. The mean accuracy for the cylinders was up to 6.2%. Larger differences were found for some datasets due to incomplete object coverage in the reference data, not due to the model. Mean precision for the spheres was similar, up to 6.1%, following adoption of a new model for deriving the angular scanning limits. The computational advantage of the proposed method over precision estimates from simulated, high-resolution point clouds is also demonstrated: the CPU time required to estimate precision can be reduced by up to three orders of magnitude. These results demonstrate the utility of the derived models for efficiently determining object precision in 3D network design in support of scanning surveys for reality capture.
Citations: 1
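The core idea, predicting parameter precision from the least-squares normal equations rather than from simulated point clouds, can be illustrated with a generic sketch for plane fitting. This is not the paper's observation distribution model; the plane geometry and noise value below are assumptions for illustration only.

```python
import numpy as np

# Generic least-squares precision prediction for a plane z = a*x + b*y + c.
# The design matrix A encodes the candidate viewpoint/object geometry; the parameter
# covariance follows from the normal equations without simulating (ray casting) points.
x, y = np.meshgrid(np.linspace(0.0, 5.0, 20), np.linspace(0.0, 3.0, 20))
A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])  # Jacobian w.r.t. (a, b, c)
sigma_obs = 0.005                                             # assumed 5 mm observation noise
N = A.T @ A                                                   # normal-equations matrix
cov_params = sigma_obs**2 * np.linalg.inv(N)                  # covariance of (a, b, c)
print(np.sqrt(np.diag(cov_params)))                           # predicted standard deviations
```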
Towards global scale segmentation with OpenStreetMap and remote sensing
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 8, Article 100031. Pub Date: 2023-04-01. DOI: 10.1016/j.ophoto.2023.100031
Authors: Munazza Usmani, Maurizio Napolitano, Francesca Bovolo
Abstract: Land Use Land Cover (LULC) segmentation is a well-known application of remote sensing in urban environments, and up-to-date, complete data are of major importance in this field. Although it has seen some success, pixel-based segmentation remains challenging because of class variability. With the increasing popularity of crowd-sourcing projects such as OpenStreetMap, the availability of user-generated content has also increased, providing a new prospect for LULC segmentation. We propose a deep-learning approach to segment objects in high-resolution imagery using semantic crowdsource information. Given the complexity of satellite imagery and crowdsource databases, deep learning frameworks play a significant role, and this integration reduces computation and labor costs. Our methods are based on a fully convolutional neural network (CNN) adapted for multi-source data processing. We discuss the use of data augmentation techniques and improvements to the training pipeline. We applied a semantic segmentation method (U-Net) and an instance segmentation method (Mask R-CNN), and Mask R-CNN showed significantly higher segmentation accuracy from both qualitative and quantitative viewpoints. The methods reach 91% and 96% overall accuracy in building segmentation and 90% in road segmentation, demonstrating the complementarity of OSM and remote sensing and their potential for city sensing applications.
Citations: 1
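One practical step implied by this workflow is turning OSM vector features into per-pixel training labels. A minimal sketch is shown below, assuming a building-footprint layer exported from OSM and a georeferenced image tile; the file names are placeholders, not data from the paper.

```python
import geopandas as gpd
import rasterio
from rasterio.features import rasterize

# Hypothetical inputs: an OSM building-footprint export and a matching image tile.
buildings = gpd.read_file("osm_buildings.geojson")
with rasterio.open("image_tile.tif") as src:
    buildings = buildings.to_crs(src.crs)                # align vector CRS with the raster
    mask = rasterize(
        [(geom, 1) for geom in buildings.geometry],      # burn value 1 for building pixels
        out_shape=(src.height, src.width),
        transform=src.transform,
        fill=0,                                          # background
    )
# 'mask' is now a per-pixel label grid that can train a U-Net or Mask R-CNN model.
```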
Pixel-based mapping of open field and protected agriculture using constrained Sentinel-2 data
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 8, Article 100033. Pub Date: 2023-04-01. DOI: 10.1016/j.ophoto.2023.100033
Authors: Daniele la Cecilia, Manu Tom, Christian Stamm, Daniel Odermatt
Abstract: Protected agriculture boosts the production of vegetables, berries and fruit, and it plays a pivotal role in guaranteeing global food security in the face of climate change. Remote sensing has proven useful for identifying the presence of (low-tech) plastic greenhouses and plastic mulches. However, classification accuracy notoriously decreases in the presence of small-scale farming, heterogeneous land cover and unaccounted-for seasonal management of protected agriculture. Here, we present the random forest-based, pixel-level Open field and Protected Agriculture land cover Classifier (OPAC), developed using Sentinel-2 L2A data. OPAC is trained on tiles from Switzerland covering two years and on the Almeria region in Spain for one acquisition day. OPAC classifies eight land covers typical of open field and protected agriculture (plastic mulches, low-tech greenhouses and, for the first time, high-tech greenhouses). Finally, we assess (1) how the land covers in OPAC are labelled in the Sentinel-2 Scene Classification Layer (SCL) and (2) the correspondence between pixels classified as protected agriculture by OPAC and by the best-performing Advanced Plastic Greenhouse Index (APGI). To reduce anthropogenic land covers, we constrain the classification task to agricultural areas retrieved from cadastral data or the Corine Land Cover map. The 5-fold cross-validation reveals an overall accuracy of 92%, but other classification scores are moderate when the three classes of protected agriculture are kept separate. However, all scores improve substantially upon grouping the three classes into one (with an Intersection over Union of 0.58 averaged over the three separate classes versus 0.98 for the single merged class). Given the recently acknowledged importance of Sentinel-2 Band 1 (central wavelength of 443 nm), the classification accuracy of OPAC for Swiss small-scale farming is mostly limited by that band's reduced spatial resolution (60 m). A careful visual assessment indicates that OPAC also achieves satisfactory generalization in a North European area (the Netherlands) and four Mediterranean areas (Spain, Italy, Crete and Turkey) without adding location- or time-specific information. There is good agreement between natural land covers classified by OPAC and by the SCL. However, the SCL does not have a class for protected agriculture, which is often classified as clouds. APGI achieved similar to lower classification accuracies than OPAC. Importantly, the APGI classification depends on a user-defined space- and time-specific threshold, whereas OPAC does not. Therefore, OPAC paves the way for rapid mapping of protected agriculture at continental scale.
Citations: 1
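The classifier itself is a standard pixel-level random forest. The sketch below shows the general pattern with scikit-learn and 5-fold cross-validation; the feature array, band count and class labels are placeholders, not the OPAC training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: per-pixel Sentinel-2 L2A reflectances (n_pixels x n_bands); y: land-cover labels,
# e.g. 0 = open field, 1 = plastic mulch, 2 = low-tech greenhouse, 3 = high-tech greenhouse.
rng = np.random.default_rng(0)
X = rng.random((5000, 12))                  # placeholder reflectances for 12 bands
y = rng.integers(0, 4, size=5000)           # placeholder labels

clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation, as reported in the paper
print(scores.mean())
```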
Towards complete tree crown delineation by instance segmentation with Mask R-CNN and DETR using UAV-based multispectral imagery and lidar data
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 8, Article 100037. Pub Date: 2023-04-01. DOI: 10.1016/j.ophoto.2023.100037
Authors: S. Dersch, A. Schöttl, P. Krzystek, M. Heurich
Abstract: Precise single-tree delineation allows for a more reliable determination of essential parameters such as tree species, height and vitality. Instance segmentation methods are powerful neural networks for detecting and segmenting individual objects and have the potential to push the accuracy of tree segmentation to a new level. In this study, two instance segmentation methods, Mask R-CNN and DETR, were applied to precisely delineate single tree crowns using multispectral images and images generated from UAV lidar data. The study area was in Bavaria, 35 km north of Munich (Germany), comprising a mixed forest stand of around 7 ha characterised mainly by Norway spruce (Picea abies) and large groups of European beech (Fagus sylvatica), with 181-236 trees per ha. The dataset, consisting of multispectral images and lidar data, was acquired using a Micasense RedEdge-MX dual camera system and a Riegl miniVUX-1UAV lidar scanner, both mounted on a hexacopter (DJI Matrice 600 Pro). Two flight missions were conducted at an altitude of approximately 85 m and an airspeed of 5 m/s, yielding a ground resolution of 5 cm and a lidar point density of 560 points/m². In total, 1408 trees were marked by visual interpretation of the remote sensing data for training and validating the classifiers. Additionally, 125 trees were surveyed by tacheometric means and used to test the optimized neural networks. The evaluations showed that segmentation using only multispectral imagery performed slightly better than segmentation using images generated from lidar data. In terms of F1 score, Mask R-CNN with color infrared (CIR) images achieved 92% in coniferous, 85% in deciduous and 83% in mixed stands. Compared to the scores obtained with the images generated from lidar data, these scores are the same for coniferous stands and slightly worse for deciduous and mixed plots, by 4% and 2%, respectively. DETR with CIR images achieved 90% in coniferous, 81% in deciduous and 84% in mixed stands. These scores were 2%, 1% and 2% worse, respectively, than those obtained with the lidar data images in the same test areas. Interestingly, four conventional segmentation methods performed significantly worse than the CIR-based and lidar-based instance segmentations. Additionally, the results revealed that tree crowns were more accurately segmented by instance segmentation. All in all, the results highlight the practical potential of the two deep learning-based tree segmentation methods, especially in comparison to the baseline methods.
Citations: 2
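For reference, the reported F1 scores combine precision and recall of crown detection. A minimal computation is sketched below with invented match counts.

```python
# Illustrative F1 computation for detected crowns matched against reference crowns.
tp = 110   # detections matching a reference crown (hypothetical counts)
fp = 9     # detections with no matching reference crown
fn = 15    # reference crowns missed by the detector

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```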
UAV-based reference data for the prediction of fractional cover of standing deadwood from Sentinel time series
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 8, Article 100034. Pub Date: 2023-04-01. DOI: 10.1016/j.ophoto.2023.100034
Authors: Felix Schiefer, Sebastian Schmidtlein, Annett Frick, Julian Frey, Randolf Klinke, Katarzyna Zielewska-Büttner, Samuli Junttila, Andreas Uhl, Teja Kattenborn
Abstract: Increasing tree mortality due to climate change has been observed globally. Remote sensing is a suitable means of detecting tree mortality and has proven effective for assessing abrupt, large-scale, stand-replacing disturbances such as those caused by windthrow, clear-cut harvesting or wildfire. Non-stand-replacing tree mortality events (e.g. due to drought) are more difficult to detect with satellite data, especially across regions and forest types. A common limitation is the availability of spatially explicit reference data. To address this issue, we propose automated generation of reference data using uncrewed aerial vehicles (UAV) and deep learning-based pattern recognition. In this study, we used convolutional neural networks (CNN) to semantically segment crowns of standing dead trees from 176 UAV-based very high-resolution (<4 cm) RGB orthomosaics acquired over six regions in Germany and Finland between 2017 and 2021. The local-level CNN predictions were then extrapolated to landscape level using Sentinel-1 (backscatter and interferometric coherence) and Sentinel-2 time series and long short-term memory networks (LSTM) to predict the cover fraction of standing deadwood per Sentinel pixel. The CNN-based segmentation of standing deadwood from UAV imagery was accurate (F1-score = 0.85) and consistent across the different study sites and years. The best results for the LSTM-based extrapolation of fractional cover of standing deadwood from Sentinel-1 and -2 time series were achieved using all available Sentinel-1 and -2 bands, the kernel normalized difference vegetation index (kNDVI) and the normalized difference water index (NDWI) (Pearson's r = 0.66, total least squares regression slope = 1.58). The landscape-level predictions showed high spatial detail and were transferable across regions and years. Our results highlight the effectiveness of deep learning-based algorithms for automated and rapid generation of reference data for large areas using UAV imagery. Potential for improving the presented upscaling approach lies particularly in ensuring the spatial and temporal consistency of the two data sources (e.g. co-registration of very high-resolution UAV data and medium-resolution satellite data). The increasing availability of publicly shared UAV imagery, combined with automated and transferable deep learning-based mapping algorithms, will further increase the potential of such multi-scale approaches.
Citations: 6
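The two spectral indices named in the abstract can be computed per pixel. The sketch below uses the simplified kernel NDVI form tanh(NDVI²) and the NIR/SWIR formulation of NDWI; the paper may use a different NDWI variant, and the band choices and reflectance values are assumptions.

```python
import numpy as np

def kndvi(nir, red):
    """Kernel NDVI in its simplified form tanh(NDVI^2)."""
    ndvi = (nir - red) / (nir + red)
    return np.tanh(ndvi ** 2)

def ndwi(nir, swir):
    """NDWI in the NIR/SWIR (vegetation water content) formulation."""
    return (nir - swir) / (nir + swir)

# Hypothetical Sentinel-2 reflectances (B8 = NIR, B4 = red, B11 = SWIR).
b8, b4, b11 = 0.32, 0.06, 0.18
print(kndvi(b8, b4), ndwi(b8, b11))
```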
Spatial patterns of biomass change across Finland in 2009-2015
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 8, Article 100036. Pub Date: 2023-04-01. DOI: 10.1016/j.ophoto.2023.100036
Authors: Markus Haakana, Sakari Tuominen, Juha Heikkinen, Mikko Peltoniemi, Aleksi Lehtonen
Abstract: Forest characteristics vary widely at the regional level and within smaller geographic areas in Finland. The amount of greenhouse gas emissions is related to changes in biomass and to the soil type (e.g. upland soils vs. peatlands). The main interest of this paper is estimating and explaining spatial patterns of tree biomass change across Finland. We analysed biomass changes on different soil and site types between 2009 and 2015 using the Finnish multi-source national forest inventory (MS-NFI) raster layers. The MS-NFI method is based on combining information from satellite imagery, digital maps and national forest inventory (NFI) field data. Automatic segmentation was used to create silvicultural management and treatment units. The average biomass estimate of the segmented MS-NFI (MS-NFI-seg) map was 73.9 tons ha⁻¹, compared to the national forest inventory estimate of 76.5 tons ha⁻¹ in 2015. Forest soil type had a similar effect on average biomass in the MS-NFI-seg and NFI data. Despite good regional and country-level results, segmentation narrowed the biomass distributions. Hence, biomass changes on segments can be considered only approximate values; moreover, small differences in average biomass may accumulate when map layers from more than one time point are compared. A kappa of 0.44 was achieved when comparing undisturbed and disturbed forest stands in the segmented Global Forest Change data (GFC-seg) and the MS-NFI-seg map. Compared to NFI, 69% and 62% of disturbed areas were detected by GFC-seg and MS-NFI-seg, respectively. Spatially accurate map data of biomass changes on forest land improve the ability to suggest optimal management alternatives for any patch of land, e.g. in terms of climate change mitigation.
Citations: 0
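For reference, the kappa statistic quoted above measures agreement between two classifications beyond chance. A small sketch of Cohen's kappa computed from a confusion matrix is given below; the counts are invented.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: map A classes, columns: map B classes)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                                        # observed agreement
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 agreement between two disturbance maps (undisturbed / disturbed).
print(cohens_kappa([[850, 60], [40, 50]]))
```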
Estimation of lidar-based gridded DEM uncertainty with varying terrain roughness and point density
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 7, Article 100028. Pub Date: 2023-01-01. DOI: 10.1016/j.ophoto.2022.100028
Authors: Luyen K. Bui, Craig L. Glennie
Abstract: Light detection and ranging (lidar) scanning systems can provide point clouds of high quality and point density. Gridded digital elevation models (DEMs) interpolated from laser scanning point clouds are widely used because of their convenience; however, DEM uncertainty is rarely provided. This paper proposes an end-to-end workflow to quantify the uncertainty (i.e., standard deviation) of a gridded lidar-derived DEM. A benefit of the proposed approach is that it does not require independent validation data measured by alternative means. The input point cloud requires per-point uncertainty, which is derived from the lidar system's observational uncertainty. The uncertainty propagated through interpolation is then derived by the general law of propagation of variances (GLOPOV), with simultaneous consideration of both horizontal and vertical point cloud uncertainties. Finally, the interpolated uncertainty is scaled by point density and a measure of terrain roughness to arrive at the final gridded DEM uncertainty. The proposed approach is tested on two lidar datasets acquired in Waikoloa, Hawaii, and Sitka, Alaska. Triangulated irregular network (TIN) interpolation is chosen as the representative gridding approach. The results indicate estimated terrain roughness/point density scale factors ranging between 1 (in flat areas) and 7.6 (in high-roughness areas), with a mean value of 2.3, for the Waikoloa dataset, and between 1 and 9.2, with a mean value of 1.2, for the Sitka dataset. As a result, the final gridded DEM uncertainties are estimated between 0.059 m and 0.677 m, with a mean value of 0.164 m, for the Waikoloa dataset and between 0.059 m and 1.723 m, with a mean value of 0.097 m, for the Sitka dataset.
Citations: 0
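The propagation step can be illustrated with a simplified sketch for a TIN-interpolated elevation, treating the three vertex heights as independent and ignoring the horizontal uncertainty terms that the full GLOPOV treatment carries; the weights and standard deviations are invented.

```python
import numpy as np

# Barycentric (TIN) interpolation of a grid node: z = w1*z1 + w2*z2 + w3*z3.
w = np.array([0.5, 0.3, 0.2])            # barycentric weights of the grid node
sigma_z = np.array([0.04, 0.05, 0.06])   # per-vertex vertical std. dev. (m), assumed independent

J = w                                    # Jacobian of z w.r.t. the three vertex heights
cov = np.diag(sigma_z ** 2)              # simplified (diagonal) vertex covariance
var_z = J @ cov @ J.T                    # law of propagation of variances
print(np.sqrt(var_z))                    # propagated std. dev. of the interpolated height
```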
In-camera IMU angular data for orthophoto projection in underwater photogrammetry
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 7, Article 100027. Pub Date: 2023-01-01. DOI: 10.1016/j.ophoto.2022.100027
Authors: Erica Nocerino, Fabio Menna
Abstract: Among photogrammetric products, orthophotos are probably the most versatile and widely used across many fields of application. In recent years, coupled with the spread of semi-automated survey and processing approaches based on photogrammetry, orthophotos have become almost a standard for monitoring the underwater environment. While on land the definition of the reference coordinate system and projection plane for orthophoto generation is trivial, underwater it can be a challenge. In this paper, we address the issue of defining the vertical direction and the resulting horizontal plane (levelling) for differential orthorectification. We propose a non-invasive, contactless method based on roll and pitch angular data provided by in-camera IMU sensors and embedded in the Exif metadata of JPEG and raw image files. We show how our approach can be seamlessly integrated into automatic SfM/MVS pipelines, provide the mathematical background, and showcase real-world application results in an underwater monitoring project. The results illustrate the effectiveness of the proposed method and, for the first time, provide a metric evaluation of the definition of the vertical direction with low-cost sensors enclosed in digital cameras used directly underwater.
Citations: 1
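A minimal sketch of how roll and pitch can define a levelling rotation is given below. Angle conventions and rotation order differ between camera makers, so the X-roll/Y-pitch composition and the sample values here are assumptions, not the paper's formulation.

```python
import numpy as np

def levelling_rotation(roll_deg, pitch_deg):
    """Rotation from the camera frame to a gravity-levelled frame,
    assuming an X-roll followed by a Y-pitch (conventions vary by vendor)."""
    r, p = np.radians([roll_deg, pitch_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    return Ry @ Rx

# Example with roll/pitch values as they might be read from Exif metadata (invented).
R = levelling_rotation(roll_deg=2.4, pitch_deg=-1.1)
vertical_in_camera = R.T @ np.array([0.0, 0.0, 1.0])   # gravity direction seen in the camera frame
print(vertical_in_camera)
```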
Model-based constraints for trajectory determination of quad-copters: Design, calibration & merits for direct orientation
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 7, Article 100030. Pub Date: 2023-01-01. DOI: 10.1016/j.ophoto.2023.100030
Authors: Kenneth Joseph Paul, Davide Antonio Cucci, Jan Skaloud
Abstract: This paper proposes a novel method to improve the georeferencing of airborne laser scanning through improved trajectory estimation using a vehicle dynamic model (VDM). In the VDM approach, the relationship between the platform dynamics and the control inputs is used as an additional observation for sensor fusion. This relationship is available for most platforms and can be used without additional hardware; however, it is modelled using parameters that are a priori unknown. The proposed in-flight calibration methodology achieves less than 2% error in the estimated model parameters compared to the values used in simulation. The effect of inertial measurement unit (IMU) noise on the accuracy of airborne laser scanning is further investigated to demonstrate the reduction in the position error of georeferenced points when VDM measurements are used. The results are evaluated through a Monte Carlo simulation involving an open-source autopilot. The reduction in the error of the estimated attitude due to vehicle modelling increases with higher intensity of time-correlated IMU noise. Using a higher-quality inertial sensor does not lead to an improvement in the position error of georeferenced points when VDM measurements are employed; however, a lower-quality IMU, such as those found on an autopilot, shows a 33% and 46% reduction in the mean and standard deviation of the position error of the georeferenced points, respectively.
Citations: 1
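To see why attitude precision drives georeferencing accuracy, a small Monte Carlo sketch of the lever-arm effect of an angular error at a given flying height is shown below. The noise level and height are invented, and this is not the paper's simulation setup.

```python
import numpy as np

# Monte Carlo sketch: a small attitude error theta displaces a laser point on the
# ground by roughly h * theta (small-angle approximation).
rng = np.random.default_rng(1)
flying_height = 85.0                        # metres above ground (assumed)
sigma_att = np.radians(0.05)                # assumed attitude noise, 0.05 deg in radians

att_errors = rng.normal(0.0, sigma_att, size=100_000)
ground_errors = flying_height * att_errors  # resulting horizontal point displacement (m)
print(ground_errors.std())                  # spread of the georeferencing error
```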
Observation distribution modelling and closed-form precision estimation of scanned 2D geometric features for network design
ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 6, Article 100022. Pub Date: 2022-12-01. DOI: 10.1016/j.ophoto.2022.100022
Authors: D.D. Lichti, K. Pexman, T.O. Chan
Abstract: Geometric features such as cylinders and planes are important objects of interest in terrestrial laser scanner surveys of complex scenes. The quality of the objects modelled from the laser scanner data is a function of many variables, and geometric network design plays a key role in maximizing precision. The expected precision can be predicted at the planning stage from simulations of the environment to be scanned; however, this practice can incur a high computational load, even if performed in 2D rather than in 3D. In this paper, a closed-form solution to estimate geometric object precision is proposed as an efficient first-order network design tool. It models the laser scanner measurement process with an observation distribution function that is introduced into the least-squares normal equations. Parameter precision is evaluated directly by solving a few (three to six) integrals and inverting the normal equations matrix. The method is presented for two cases: a circle lying in the horizontal plane and a 2D line, each scanned from a single location. Both a simplified circle model and a more general circle model are explored. The method is then extended using the summation-of-normals approach to allow precision estimation from the combination of multiple scans from different locations. Results from many real datasets, 95 circles and 30 lines, show that the distributions of the range observations and derived Cartesian coordinates follow the model predictions. Moreover, the results demonstrate that the method can predict circle parameter standard deviations to within 4%-6% of the experimental values. The agreement is at the 10% level for one very specific case due to inherently high parameter correlation. The agreement of line parameter standard deviations is much greater, approximately 0.1%. The results show the method can be a valuable tool for predicting feature quality with minimal computational requirements. The method is beneficial not only for laser scanner network design but also for instantaneous 2D map construction performed in SLAM-based surveys.
Citations: 3
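The general structure of such a closed-form estimate can be written schematically: the discrete sum over simulated observations is replaced by an integral weighted by the angular observation density, and multiple scans are combined by summing their normal matrices. The expression below illustrates that structure only and is not the paper's exact derivation.

```latex
% Schematic only: normal matrix accumulated over the angular observation density \rho(\theta).
\[
N \;=\; \int_{\theta_{\min}}^{\theta_{\max}} A(\theta)^{\mathsf{T}}\, W(\theta)\, A(\theta)\, \rho(\theta)\,\mathrm{d}\theta ,
\qquad
\Sigma_{\hat{x}} \;=\; N^{-1},
\qquad
N_{\text{total}} \;=\; \sum_{k} N_{k} \quad \text{(summation of normals for multiple scans)} .
\]
```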