ISPRS Open Journal of Photogrammetry and Remote Sensing: Latest Articles

Spectral Profile Partial Least-Squares (SP-PLS): Local multivariate pansharpening on spectral profiles
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-10-27 DOI: 10.1016/j.ophoto.2023.100049
Tuomas Sihvonen, Zina-Sabrina Duma, Heikki Haario, Satu-Pia Reinikainen
Abstract: The compatibility of multispectral (MS) pansharpening algorithms with hyperspectral (HS) data is limited. With the recent development of HS satellites, there is a need for methods that provide high spatial and spectral fidelity in both HS and MS scenarios. This article presents a fast pansharpening method based on dividing hyperspectral data into subgroups of spectrally similar pixels using k-means clustering and Spectral Angle Mapper (SAM) profiling. Local Partial Least-Squares (PLS) models are calibrated for each spectral subgroup against the corresponding pixels of the panchromatic image, and the models are then inverted to retrieve high-resolution pansharpened images. The method is tested against other methods that can handle both MS and HS pansharpening and is assessed using reduced- and full-resolution evaluation methodologies. Because it is built on a statistical multivariate approach, the proposed method can also render uncertainty maps for spectral or spatial fidelity, a functionality not reported in any other pansharpening study.

Volume 10, Article 100049. Open access PDF: https://www.sciencedirect.com/science/article/pii/S2667393223000200/pdfft?md5=8601d34da365ecdb3113dcf7bf967e02&pid=1-s2.0-S2667393223000200-main.pdf
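The subgroup-and-local-model idea can be illustrated compactly. The sketch below (synthetic data; ordinary least squares stands in for PLS, and all variable names are our own) clusters spectral profiles with k-means, calibrates one linear model per subgroup against the panchromatic values, and checks that the local models fit no worse than a single global one:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain k-means on spectral profiles (rows of X).
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Synthetic low-resolution scene: N pixels with B spectral bands.
N, B, k = 200, 12, 3
spectra = rng.random((N, B))
pan = spectra.mean(1)                      # toy panchromatic response

labels = kmeans(spectra, k)

# One linear model per subgroup: band values ~ slope * pan + intercept.
models = {}
recon = np.empty_like(spectra)
for j in range(k):
    m = labels == j
    if not m.any():
        continue
    A = np.c_[pan[m], np.ones(m.sum())]
    models[j], *_ = np.linalg.lstsq(A, spectra[m], rcond=None)
    recon[m] = A @ models[j]               # per-pixel spectral prediction

err_local = ((recon - spectra) ** 2).sum()

# Baseline: one global model over all pixels.
A_all = np.c_[pan, np.ones(N)]
coef_g, *_ = np.linalg.lstsq(A_all, spectra, rcond=None)
err_global = ((A_all @ coef_g - spectra) ** 2).sum()
```

Per-subgroup calibration can only reduce the training residual relative to a single global model, which is the intuition behind fitting the models locally.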
Citations: 0
Pol-InSAR-Island - A benchmark dataset for multi-frequency Pol-InSAR data land cover classification
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-10-13 DOI: 10.1016/j.ophoto.2023.100047
Sylvia Hochstuhl , Niklas Pfeffer , Antje Thiele , Stefan Hinz , Joel Amao-Oliva , Rolf Scheiber , Andreas Reigber , Holger Dirks
Abstract: This paper presents Pol-InSAR-Island, the first publicly available multi-frequency Polarimetric Interferometric Synthetic Aperture Radar (Pol-InSAR) dataset labeled with detailed land cover classes, which serves as a challenging benchmark for land cover classification. In recent years, machine learning has become a powerful tool for remote sensing image analysis. While numerous large-scale benchmark datasets exist for training and evaluating machine learning models on optical data, the availability of labeled SAR data, and Pol-InSAR data in particular, is very limited. The lack of labeled data for training, testing, and comparing approaches hinders the rapid development of machine learning algorithms for Pol-InSAR image analysis. The Pol-InSAR-Island benchmark dataset presented in this paper aims to fill this gap. The dataset consists of Pol-InSAR data acquired in S- and L-band by DLR's airborne F-SAR system over the East Frisian island of Baltrum. The interferometric image pairs are the result of a repeat-pass measurement with a time offset of several minutes. The image data are given as 6 × 6 coherency matrices in ground range on a 1 m × 1 m grid. Pixel-accurate class labels covering 12 land cover classes are generated in a semi-automatic process based on an existing biotope-type map and visual interpretation of SAR and optical images. Fixed training and test subsets are defined to ensure the comparability of approaches trained and tested on the dataset. In addition to the dataset, results of supervised Wishart and Random Forest classifiers achieving mean Intersection-over-Union scores between 24% and 67% are provided as a baseline for future work. The dataset is provided via KITopen: https://doi.org/10.35097/1700.

Volume 10, Article 100047.
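For reference, the mean Intersection-over-Union score used to report the classifier baselines can be computed from label maps as follows (a generic sketch, not the benchmark's own evaluation code):

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    # Per-class Intersection-over-Union, averaged over the classes that
    # actually occur in the reference labels.
    ious = []
    for c in range(n_classes):
        t, p = y_true == c, y_pred == c
        if t.sum() == 0:          # class absent from the reference: skip
            continue
        ious.append((t & p).sum() / (t | p).sum())
    return float(np.mean(ious))

# Tiny worked example: per-class IoUs are 1/3, 2/3, and 1/2.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
score = mean_iou(y_true, y_pred, 3)
```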
Citations: 0
Automatic labelling for semantic segmentation of VHR satellite images: Application of airborne laser scanner data and object-based image analysis
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100046
Kirsi Karila, Leena Matikainen, Mika Karjalainen, Eetu Puttonen, Yuwei Chen, Juha Hyyppä
Abstract: The application of deep learning methods to remote sensing data has produced good results in recent studies. A promising application area is automatic land cover classification (semantic segmentation) from very high resolution (VHR) satellite imagery. However, deep learning methods require large, labelled training datasets suited to the study area. Map data can be used as training data, but it is often insufficiently detailed for VHR satellite imagery. National airborne laser scanning (lidar) datasets provide additional detail and are available in many countries, and successful land cover classifications from lidar datasets have been achieved, e.g., by object-based image analysis. In the present study, we investigated the feasibility of using airborne laser scanner data and object-based image analysis to automatically generate labelled training data for a deep-neural-network-based land cover classification of a VHR satellite image. Input data for the object-based classification included digital surface models, intensity, and pulse information derived from the lidar data. The resulting land cover classification was then used as training data for deep learning. A state-of-the-art deep learning architecture, UNetFormer, was trained and applied to the land cover classification of a WorldView-3 stereo dataset. For the semantic segmentation, three different input composites were produced from the red, green, blue, NIR, and digital surface model bands derived from the satellite data. The quality of the generated training data and of the semantic segmentation results was estimated using an independent test set of ground truth points. The results show a final satellite image classification accuracy (94-96%) close to the training data accuracy (97%). It was also demonstrated that the resulting maps can be used for land cover change detection.

Volume 9, Article 100046.
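Producing multi-channel network inputs from selected bands is a simple stacking-and-scaling step. The sketch below builds three normalized composites from the available bands; the specific band combinations are illustrative assumptions, not necessarily the ones used in the paper:

```python
import numpy as np

def normalize(band):
    # Scale a band to [0, 1] so channels are comparable when stacked.
    b = band.astype(float)
    return (b - b.min()) / (b.max() - b.min() + 1e-12)

h, w = 4, 4
rng = np.random.default_rng(1)
red, green, blue, nir, dsm = (rng.random((h, w)) for _ in range(5))

# Three example 3-channel composites (hypothetical band choices).
rgb  = np.stack([normalize(b) for b in (red, green, blue)], axis=-1)
nrg  = np.stack([normalize(b) for b in (nir, red, green)], axis=-1)
nrd  = np.stack([normalize(b) for b in (nir, red, dsm)], axis=-1)
```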
Citations: 0
Individual tree segmentation and species classification using high-density close-range multispectral laser scanning data
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100039
Aada Hakula , Lassi Ruoppa , Matti Lehtomäki , Xiaowei Yu , Antero Kukko , Harri Kaartinen , Josef Taher , Leena Matikainen , Eric Hyyppä , Ville Luoma , Markus Holopainen , Ville Kankare , Juha Hyyppä
Abstract: Tree species characterise, for example, the biodiversity, health, economic potential, and resilience of an ecosystem. Tree species classification based on remote sensing data, however, is known to be a challenging task. In this paper, we study for the first time the feasibility of tree species classification using high-density point clouds collected with an airborne close-range multispectral laser scanning system, a technique that has previously proved capable of providing stem curves and volumes accurately and rapidly for standing trees. To this end, we carried out laser scanning measurements from a helicopter on 53 forest sample plots, each 32 m × 32 m in size. The plots covered approximately 5500 trees in total, including Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies (L.) H. Karst.), and deciduous trees such as downy birch (Betula pubescens Ehrh.) and silver birch (Betula pendula Roth). The multispectral laser scanning system consisted of integrated Riegl VUX-1HA, miniVUX-3UAV, and VQ-840-G scanners (Riegl GmbH, Austria) operating at wavelengths of 1550 nm, 905 nm, and 532 nm, respectively. A new approach, layer-by-layer segmentation, was developed for individual tree detection and segmentation from the dense point cloud data. After individual tree segmentation, 249 features were computed for tree species classification, which was tested on approximately 3000 trees. The features described point cloud geometry as well as single-channel and multi-channel reflectance metrics. Both feature selection and tree species classification were conducted using the random forest method. Using the layer-by-layer segmentation algorithm, trees in the dominant and co-dominant categories were detected at rates of 89.5% and 77.9%, respectively, whereas suppressed trees were detected with a success rate of 15.2-42.3%, clearly improving upon standard watershed segmentation. The overall accuracy of tree species classification was 73.1% when using geometric features from the 1550 nm scanner data and 86.6% when combining the geometric features with reflectance information from the 1550 nm data. Using multispectral reflectance together with geometric features improved the overall classification accuracy up to 90.8%. Classification accuracies were as high as 92.7% and 93.7% for dominant and co-dominant trees, respectively.

Volume 9, Article 100039.
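The kind of per-tree features involved can be illustrated with a short sketch. The metrics below (our own illustrative choices) are simple stand-ins for the 249 geometric and reflectance features computed in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy segmented tree: 500 points (x, y, z) plus single-channel reflectance.
xyz = rng.random((500, 3)) * np.array([2.0, 2.0, 15.0])
refl = rng.random(500)

def tree_features(xyz, refl):
    # A handful of illustrative geometric and reflectance metrics per tree.
    z = xyz[:, 2]
    return {
        "height": float(z.max() - z.min()),
        "z_p50": float(np.percentile(z, 50)),
        "z_p90": float(np.percentile(z, 90)),
        "crown_width": float((np.ptp(xyz[:, 0]) + np.ptp(xyz[:, 1])) / 2),
        "refl_mean": float(refl.mean()),
        "refl_std": float(refl.std()),
    }

feats = tree_features(xyz, refl)
```

In the study, feature vectors like this (but far richer, and per scanner channel) feed a random forest for both feature selection and classification.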
Citations: 1
Automated pipeline reconstruction using deep learning & instance segmentation
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100043
Lukas Hart , Stefan Knoblach , Michael Möser
Abstract: BIM is a powerful tool for the construction industry as well as various other industries, and its use has increased massively in recent years. Laser scanners are usually used for the measurement, but in addition to high acquisition costs they also cause problems on reflective surfaces. The use of photogrammetric techniques for BIM in industrial plants, by contrast, is less widespread and less automated. CAD software for point cloud evaluation contains, at best, automated reconstruction algorithms for pipes; fittings, flanges, and elbows require manual reconstruction. We present a method for the automated processing of photogrammetric images for modeling pipelines in industrial plants. We use instance segmentation and reconstruct the components of the pipeline directly from the edges of the segmented objects in the images. Hardware costs can be kept low by using photogrammetry instead of laser scanning. Besides the automatic extraction and reconstruction of pipes, we have also implemented this for elbows and flanges. For object recognition, we fine-tuned different instance segmentation models on our own training data while also testing various data augmentation techniques. The average precision varies depending on the object type; the best results, with an average precision of about 40%, were achieved with Mask R-CNN. The accuracy of the automated reconstruction was examined on a test object in the laboratory: deviations from the reference geometry were in the range of a few millimeters and comparable to manual reconstruction. In addition, further tests were carried out with images from a plant. Provided that the objects were correctly and completely recognized, a satisfactory reconstruction is possible with the help of our method.

Volume 9, Article 100043.
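Reconstructing a pipe's circular cross-section from segmented edge points is, at its core, a primitive-fitting problem. The sketch below shows an algebraic (Kåsa) least-squares circle fit on synthetic edge points; it is a generic illustration of this kind of fitting, not the authors' implementation:

```python
import numpy as np

def fit_circle(pts):
    # Kåsa fit: x^2 + y^2 = 2*a*x + 2*b*y + c is linear in (a, b, c),
    # giving center (a, b) and radius sqrt(c + a^2 + b^2).
    x, y = pts[:, 0], pts[:, 1]
    A = np.c_[2 * x, 2 * y, np.ones(len(pts))]
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, float(np.sqrt(c + a**2 + b**2))

# Synthetic edge points on a circle of radius 1.5 centered at (3, -2).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.c_[3 + 1.5 * np.cos(theta), -2 + 1.5 * np.sin(theta)]
cx, cy, r = fit_circle(pts)
```

On noisy edge points the same fit returns a least-squares estimate of the pipe's center and radius per cross-section.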
Citations: 0
Deep unsupervised learning for 3D ALS point clouds change detection
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100044
Iris de Gélis , Sudipan Saha , Muhammad Shahzad , Thomas Corpetti , Sébastien Lefèvre , Xiao Xiang Zhu
Abstract: Change detection from traditional 2D optical images has limited capability to model changes in the height or shape of objects. Change detection using 3D point clouds from photogrammetry or lidar surveying can fill this gap by providing critical depth information. While most existing machine-learning-based 3D point cloud change detection methods are supervised, they depend heavily on the availability of annotated training data, which is in practice a critical limitation. To circumvent this dependence, we propose an unsupervised 3D point cloud change detection method based mainly on self-supervised learning using deep clustering and contrastive learning. The proposed method also relies on an adaptation of deep change vector analysis to 3D point clouds via nearest-point comparison. Experiments conducted on an aerial lidar survey dataset show that the proposed method outperforms traditional unsupervised methods, with a gain of about 9% in mean accuracy (reaching more than 85%). It thus appears to be a relevant choice in scenarios where prior knowledge (labels) is not available. The code will be made available at https://github.com/IdeGelis/torch-points3d-SSL-DCVA.

Volume 9, Article 100044.
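The nearest-point comparison underlying the deep change vector analysis adaptation can be illustrated with a brute-force cloud-to-cloud distance (real pipelines would use a k-d tree; the data and threshold below are synthetic):

```python
import numpy as np

def nearest_point_distance(cloud_a, cloud_b):
    # For each point in cloud_a, the distance to its nearest neighbour
    # in cloud_b (brute force, O(n*m) memory; fine for a toy example).
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(3)
ground = rng.random((100, 3))                 # epoch 1: bare terrain
# Epoch 2: same terrain plus a new 5 m high "building" block over part of it.
new_block = ground[:20] + np.array([0.0, 0.0, 5.0])
epoch2 = np.vstack([ground, new_block])

dist = nearest_point_distance(epoch2, ground)
changed = dist > 1.0                          # flags only the inserted block
```

Thresholding the per-point distances separates unchanged terrain (distance 0 here) from the newly added structure.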
Citations: 4
Instance segmentation of individual tree crowns with YOLOv5: A comparison of approaches using the ForInstance benchmark LiDAR dataset
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100045
Adrian Straker , Stefano Puliti , Johannes Breidenbach , Christoph Kleinn , Grant Pearse , Rasmus Astrup , Paul Magdon
Abstract: Fine-grained information at the level of individual trees constitutes a key component of forest observation, enabling forest management practices that tackle the effects of climate change and the loss of biodiversity in forest ecosystems. Such information on individual tree crowns (ITCs) can be derived with ITC segmentation approaches that utilize remotely sensed data. However, many ITC segmentation approaches require prior knowledge of forest characteristics, which is difficult to obtain for parameterization. This can be avoided by adopting data-driven, automated workflows based on convolutional neural networks (CNNs). To contribute to the advancement of efficient ITC segmentation approaches, we present a novel ITC segmentation approach based on the YOLOv5 CNN. We analyzed its performance on a comprehensive international UAV laser scanning (UAV-LS) dataset, ForInstance, which covers a wide range of forest types. The ForInstance dataset consists of 4192 individually annotated trees in high-density point clouds, with point densities ranging from 498 to 9529 points m⁻², collected across 80 sites. The dataset was split into 70% for training and validation and 30% for model performance assessment (test data). For the best-performing model, we observed an F1-score of 0.74 for ITC segmentation and a tree detection rate (DET%) of 64% on the test data. This model outperformed an ITC segmentation approach that requires prior knowledge of forest characteristics by 41% and 33% in F1-score and DET%, respectively. Furthermore, we tested the effect of reduced point densities (498, 50, and 10 points m⁻²) on ITC segmentation performance. The YOLO model exhibited promising F1-scores of 0.69 and 0.62 even at point densities of 50 and 10 points m⁻², respectively, between 27% and 34% better than the approach requiring prior knowledge. Furthermore, the areas of the ITC segments produced by the best-performing YOLO model were close to the reference areas (RMSE = 3.19 m²), suggesting that YOLO-derived ITC segments can be used to derive information at the ITC level.

Volume 9, Article 100045.
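The reported scores follow the usual detection-metric definitions. The sketch below computes precision, recall (the detection rate, DET%), and F1 from match counts between predicted and reference crowns; the counts are hypothetical, chosen only so that recall matches the reported 64%:

```python
def detection_metrics(tp, fp, fn):
    # Precision, recall (= detection rate), and F1 from counts of
    # true-positive, false-positive, and false-negative crown matches.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 100 reference crowns, 64 matched.
p, r, f1 = detection_metrics(tp=64, fp=12, fn=36)
```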
Citations: 3
A step towards inter-operable Unmanned Aerial Vehicles (UAV) based phenotyping; A case study demonstrating a rapid, quantitative approach to standardize image acquisition and check quality of acquired images
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100042
Gattu Priyanka , Sunita Choudhary , Krithika Anbazhagan , Dharavath Naresh , Rekha Baddam , Jan Jarolimek , Yogesh Parnandi , P. Rajalakshmi , Jana Kholova
Abstract: Unmanned aerial vehicle (UAV) based imaging is being intensively explored for precise crop evaluation. Various optical sensors, such as RGB, multispectral, and hyperspectral cameras, can be used for this purpose. Consistent image quality is crucial for accurate plant trait prediction (i.e., phenotyping). However, achieving consistent image quality can be a challenge, as it is affected by (i) UAV and camera technical settings, (ii) the environment, and (iii) crop and field characteristics, which are not always under the direct control of the UAV operator. Capturing images therefore requires robust protocols to acquire images of suitable quality, and there is a lack of systematic studies on this topic in the public domain. In this case study, we present an approach (protocols, tools, and analytics) that addresses this gap in our specific context. We had a drone (DJI Inspire 1 Raw) equipped with an RGB camera (DJI Zenmuse X5), which needed to be standardized for phenotyping of the canopy cover (CC) of annual crops. To achieve this, we flew 69 flights in Hyderabad, India, over 5 different cereal and legume crops (~300 genotypes) at different vegetative growth stages, with different combinations of UAV and camera technical setups, and across environmental conditions typical for that region. For each crop-genotype combination, the ground truth for CC was rapidly estimated using an automated phenomics platform (the LeasyScan phenomics platform, ICRISAT). This dataset enabled us to (1) quantify the sensitivity of image acquisition to the main technical, environmental, and crop-related factors; this analysis was then used to develop image acquisition protocols specific to our UAV-camera system, a process significantly eased by the automated ground-truth collection. We also (2) identified important image quality indicators that integrate the effects of (1), and these indicators were used to develop quality control protocols for inspecting images after acquisition. To ease (2), we present a web-based application, available at https://github.com/GattuPriyanka/Framework-for-UAV-image-quality.git, which automatically calculates these key image quality indicators. Overall, we present a methodology for establishing an image acquisition protocol and a quality check for the obtained images, enabling high accuracy of plant trait inference. The methodology was demonstrated on a particular UAV-camera setup and focused on a specific crop trait (CC) at the ICRISAT research station (Hyderabad, India). We envision that, in the future, a similar image quality control system could facilitate the interoperability of data from various UAV imaging setups.

Volume 9, Article 100042.
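Post-acquisition image quality indicators of this kind can be as simple as brightness and sharpness statistics. The sketch below (illustrative choices, not necessarily the indicators used in the study) computes mean brightness and Laplacian-variance sharpness on a grayscale image, and shows that blurring lowers the sharpness score:

```python
import numpy as np

def quality_indicators(img):
    # Two simple per-image indicators: mean brightness, and the variance
    # of a 5-point Laplacian response (a standard sharpness proxy).
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return {"brightness": float(img.mean()), "sharpness": float(lap.var())}

rng = np.random.default_rng(4)
sharp = rng.random((32, 32))
# Crude blur: 2x2 box average, which suppresses high-frequency detail.
blurred = (sharp[:-1, :-1] + sharp[1:, :-1]
           + sharp[:-1, 1:] + sharp[1:, 1:]) / 4

q_sharp = quality_indicators(sharp)
q_blur = quality_indicators(blurred)
```

Thresholds on such indicators are what a quality control protocol would apply before passing images on to trait estimation.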
Citations: 0
Branch information extraction from Norway spruce using handheld laser scanning point clouds in Nordic forests
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100040
Olli Winberg , Jiri Pyörälä , Xiaowei Yu , Harri Kaartinen , Antero Kukko , Markus Holopainen , Johan Holmgren , Matti Lehtomäki , Juha Hyyppä
Abstract: We showed that a mobile handheld laser scanner (HHLS) provides useful features concerning the wood-quality-influencing external structures of trees. When linked with wood properties measured at a sawmill using state-of-the-art X-ray scanners, these data enable the training of various wood quality models for use in targeting and planning future wood procurement. A total of 457 Norway spruce (Picea abies (L.) H. Karst.) sample trees from 13 spruce-dominated stands in southeastern Finland were used in the study. All test sites were recorded with a ZEB Horizon HHLS, and the sample trees were tracked to a sawmill and X-rayed. Two branch extraction techniques were applied to the HHLS point clouds: (1) a method developed in this study based on density-based spatial clustering of applications with noise (DBSCAN), and (2) a segmentation-based quantitative structure model (treeQSM). On average, the treeQSM method detected 46% more branches per tree than DBSCAN did. However, compared with the X-ray references, some of the branches detected by treeQSM may be false positives or too small for the X-rays to detect as knots, as the method overestimated the whorl count by 19% relative to the X-rays. The DBSCAN method, on the other hand, only detected larger branches and showed a -11% bias in whorl count. Overall, DBSCAN underestimated knot volumes within trees by 6%, while treeQSM overestimated them by 25%. When the HHLS features were input into a Random Forest model, the knottiness variables measured at the sawmill were predicted with R² values of 0.47-0.64. The results were comparable with previous results derived from static terrestrial laser scanners. The obtained stem branching data are relevant for predicting wood quality attributes but do not provide data directly comparable with the X-ray features. Future work should combine terrestrial point clouds with dense above-canopy point clouds to overcome limitations related to vertical coverage.

Volume 9, Article 100040.
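DBSCAN, the basis of the first branch extraction technique, groups points that are density-reachable within a radius eps and flags sparse points as noise. A minimal self-contained implementation on toy 3D points (a generic sketch, not the study's code):

```python
import numpy as np

def dbscan(pts, eps, min_pts):
    # Minimal DBSCAN: clusters grow from core points (>= min_pts neighbours
    # within eps); unreachable points keep the noise label -1.
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        visited[i] = True
        labels[i] = cluster
        stack = [i]
        while stack:                      # expand the cluster from core points
            j = stack.pop()
            for q in neighbors[j]:
                if labels[q] == -1:
                    labels[q] = cluster
                if not visited[q]:
                    visited[q] = True
                    if len(neighbors[q]) >= min_pts:
                        stack.append(q)
        cluster += 1
    return labels

# Two tight 6-point clumps (branch-like blobs) plus one isolated outlier.
a = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
              [0, 0, 0.1], [0.1, 0.1, 0], [0.1, 0, 0.1]])
pts = np.vstack([a, a + 5.0, [[10.0, 10.0, 10.0]]])
labels = dbscan(pts, eps=0.5, min_pts=4)
```

In the branch extraction setting, each recovered cluster would correspond to a candidate branch, and the noise label filters isolated stem or foliage points.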
Citations: 2
Multi-modal image matching to colorize a SLAM based point cloud with arbitrary data from a thermal camera
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2023-08-01 DOI: 10.1016/j.ophoto.2023.100041
Melanie Elias , Alexandra Weitkamp , Anette Eltner
Abstract: Thermal mapping of buildings is one approach to assessing insulation, which is important for upgrading buildings to increase energy efficiency and for climate change adaptation. Personal laser scanning (PLS) is a fast and flexible option that has become increasingly popular for efficiently mapping building facades. However, some measurement systems do not include sufficient colorization of the point cloud. In order to detect, map, and reference damage to building facades, it is of great interest to transfer images from RGB and thermal infrared (TIR) cameras to the point cloud. This study investigates whether a flexible tool can be developed that enables such measurements with high spatial resolution and flexibility. To that end, an image-to-geometry registration approach for rendered point clouds is combined with a deep learning (DL) based image feature matcher to estimate the camera pose of arbitrary images relative to the geometry, i.e. the point cloud, in order to map color information. We developed a research design for multi-modal image matching to investigate the alignment of RGB and TIR camera images to a PLS point cloud with intensity information, using calibrated and uncalibrated images. The accuracies of the estimated pose parameters show that registration performs best for pre-calibrated, i.e. undistorted, RGB camera images. Aligning uncalibrated RGB and TIR camera images to a point cloud is possible if sufficient, well-distributed 2D-3D feature matches between image and point cloud are available. Our workflow enables the colorization of point clouds with high accuracy using images with very different radiometric characteristics and image resolutions. Only a rough approximation of the camera pose is required, so the approach relieves strict sensor synchronization requirements.

Volume 9, Article 100041.
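Once a camera pose is known, mapping color onto the cloud reduces to projecting each 3D point through a pinhole camera model and sampling the pixel it lands on. A generic sketch with invented intrinsics and a trivial pose (not the authors' workflow):

```python
import numpy as np

def colorize(points, K, R, t, image):
    # Pinhole projection x ~ K (R X + t); sample per-pixel color for each
    # point that projects in front of the camera and inside the frame.
    cam = points @ R.T + t                     # world -> camera coordinates
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    px = np.round(uv).astype(int)
    h, w = image.shape[:2]
    inside = ((cam[:, 2] > 0)
              & (px[:, 0] >= 0) & (px[:, 0] < w)
              & (px[:, 1] >= 0) & (px[:, 1] < h))
    colors = np.zeros((len(points), 3))
    colors[inside] = image[px[inside, 1], px[inside, 0]]
    return colors, inside

# Invented intrinsics, identity pose, and a tiny image with two marked pixels.
K = np.array([[100.0, 0.0, 16.0], [0.0, 100.0, 16.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
image = np.zeros((32, 32, 3))
image[16, 16] = [1.0, 0.0, 0.0]
image[16, 26] = [0.0, 1.0, 0.0]

# Two points in front of the camera, one behind it.
points = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.0, -1.0]])
colors, inside = colorize(points, K, R, t, image)
```

Points behind the camera or outside the frame keep a default color, mirroring how only covered parts of the cloud receive RGB or TIR information.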
Citations: 1