{"title":"Low-cost real-time aerial object detection and GPS location tracking pipeline","authors":"Allan Lago, Sahaj Patel, Aditya Singh","doi":"10.1016/j.ophoto.2024.100069","DOIUrl":"10.1016/j.ophoto.2024.100069","url":null,"abstract":"<div><p>Real-time object detection and tracking is an active area of aerial remote sensing research that enables many environmental and ecological monitoring and preservation applications. Despite the development of several solutions tailored for these specific applications, trade-offs between cost efficiency and feature richness persist. This paper proposes a lightweight, low-cost, and modular approach to real-time object detection and instance tracking, enabling a wide gamut of use cases. By integrating real-time object detection models with affordable embedded hardware, we present a system that uses image metadata to perform geolocation on detected objects, enabling real-time applications thanks to minimal computational overhead. The algorithm generates cleaner 'areas of interest' by filtering geolocated detections with a clustering algorithm to remove false positives. Our findings show this to be a viable solution, with real-time processing speeds and GPS positioning accuracy within a meter. While there is room for improvement, our proposed pipeline represents a significant step forward in lowering the costs of applying computer vision to conservation applications.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"13 ","pages":"Article 100069"},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000139/pdfft?md5=3e4423d991dff13ba71a9c0bdb66c837&pid=1-s2.0-S2667393224000139-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141274879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Hermann , M. Weinmann , F. Nex , E.K. Stathopoulou , F. Remondino , B. Jutzi , B. Ruf
{"title":"Depth estimation and 3D reconstruction from UAV-borne imagery: Evaluation on the UseGeo dataset","authors":"M. Hermann , M. Weinmann , F. Nex , E.K. Stathopoulou , F. Remondino , B. Jutzi , B. Ruf","doi":"10.1016/j.ophoto.2024.100065","DOIUrl":"10.1016/j.ophoto.2024.100065","url":null,"abstract":"<div><p>Depth estimation and 3D model reconstruction from aerial imagery is an important task in photogrammetry, remote sensing, and computer vision. To compare the performance of different image-based approaches, this study presents a benchmark for UAV-based aerial imagery using the UseGeo dataset. The contributions include the release of various evaluation routines on GitHub, as well as a comprehensive comparison of baseline approaches, such as methods for offline multi-view 3D reconstruction resulting in point clouds and triangle meshes, online multi-view depth estimation, and single-image depth estimation using self-supervised deep learning. With the release of our evaluation routines, we aim to provide a universal protocol for the evaluation of depth estimation and 3D reconstruction methods on the UseGeo dataset. The conducted experiments and analyses show that each method excels in a different category: the depth estimation from COLMAP outperforms that of the other approaches, ACMMP achieves the lowest error and highest completeness for point clouds, while OpenMVS produces triangle meshes with the lowest error. Among the online methods for depth estimation, the approach from the Plane-Sweep Library outperforms the FaSS-MVS approach, while the latter achieves the lowest processing time. Even though the particularly challenging nature of the dataset and the small amount of training data lead to a significantly higher error in the results of the self-supervised single-image depth estimation approach, it outperforms all other approaches in terms of processing time and frame rate. In our evaluation, we have also considered modern learning-based approaches that can be used for image-based 3D reconstruction, such as NeRFs. However, due to the significantly lower quality of the resulting 3D models, we have only included a qualitative comparison between NeRF-based and conventional approaches in the scope of this work.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"13 ","pages":"Article 100065"},"PeriodicalIF":0.0,"publicationDate":"2024-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000085/pdfft?md5=62b1d4520d924b174fc6755a9b752484&pid=1-s2.0-S2667393224000085-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141039465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parzival Borlinghaus , Frederic Tausch , Richard Odemer
{"title":"Natural color dispersion of corbicular pollen limits color-based classification","authors":"Parzival Borlinghaus , Frederic Tausch , Richard Odemer","doi":"10.1016/j.ophoto.2024.100063","DOIUrl":"https://doi.org/10.1016/j.ophoto.2024.100063","url":null,"abstract":"<div><p>Various methods have been developed to assign pollen to its botanical origin. They range from technically complex approaches to the less precise but sophisticated chromatic assessment, in which the pollen colors are used for identification. However, a common challenge lies in the similarity of colors of pollen from different plant species. The advent of camera-based bee monitoring systems has sparked renewed interest in classifying pollen based on color and offers potential advances for honey bee biomonitoring. Despite the promise of improved sensor accuracy, a critical examination of whether color diversity within a single species may be the primary limiting factor has been lacking. Our comprehensive analysis, which includes over 85,000 corbicular pollen from 30 major pollen species, shows that the average color variation within each species is distinguishable to a human observer, similar to the difference between two dissimilar colors. From today's perspective, the considerable color variation within a single pollen source makes the use of color alone to classify pollen impractical. When picking a single pollen color from the entire dataset, we report a correct pollen type classification rate of 67 %. The accuracy was highly dependent on the type and ranged from 0 % for rare types with common colors to 99 % for distinct colors. The large color dispersion within species highlights the need for complementary methods to improve the accuracy and reliability of color-based pollen identification in biomonitoring applications.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100063"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000061/pdfft?md5=60851447727c71ddaf821e0054cde41f&pid=1-s2.0-S2667393224000061-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140618444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning techniques for hyperspectral image analysis in agriculture: A review","authors":"Mohamed Fadhlallah Guerri , Cosimo Distante , Paolo Spagnolo , Fares Bougourzi , Abdelmalik Taleb-Ahmed","doi":"10.1016/j.ophoto.2024.100062","DOIUrl":"https://doi.org/10.1016/j.ophoto.2024.100062","url":null,"abstract":"<div><p>In recent years, there has been a growing emphasis on assessing and ensuring the quality of horticultural and agricultural produce. Traditional methods involving field measurements, investigations, and statistical analyses are labour-intensive, time-consuming, and costly. As a solution, Hyperspectral Imaging (HSI) has emerged as a non-destructive and environmentally friendly technology. HSI has gained significant popularity as a new technology, particularly for its promising applications in remote sensing, notably in agriculture. However, classifying HSI data is highly complex because it involves several challenges, such as the excessive redundancy of spectral bands, scarcity of training samples, and the intricate non-linear relationship between spatial positions and spectral bands. Notably, Deep Learning (DL) techniques have demonstrated remarkable efficacy in various HSI analysis tasks, including those within agriculture. As interest continues to surge in leveraging HSI data for agricultural applications through DL approaches, a pressing need exists for a comprehensive survey that can effectively navigate researchers through the significant strides achieved and the promising future research directions in this domain. This literature review compiles, analyzes, and discusses recent endeavours employing DL methodologies. These methodologies encompass a spectrum of approaches, ranging from Autoencoders (AE) to Convolutional Neural Networks (CNN) (in 1D, 2D, and 3D configurations), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Generative Adversarial Networks (GAN), Transfer Learning (TL), Semi-Supervised Learning (SSL), Few-Shot Learning (FSL) and Active Learning (AL). These approaches are tailored to address the unique challenges posed by agricultural HSI analysis. This review evaluates and discusses the performance exhibited by these diverse approaches. To this end, the efficiency of these approaches has been rigorously analyzed and discussed based on the results of the state-of-the-art papers on widely recognized land cover datasets. <span>GitHub repository</span>.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100062"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266739322400005X/pdfft?md5=5a272b7d6066b8efe8bee784c28464f9&pid=1-s2.0-S266739322400005X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140331066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving spatial transferability of deep learning models for small-field crop yield prediction","authors":"Stefan Stiller , Kathrin Grahmann , Gohar Ghazaryan , Masahiro Ryo","doi":"10.1016/j.ophoto.2024.100064","DOIUrl":"https://doi.org/10.1016/j.ophoto.2024.100064","url":null,"abstract":"<div><p>Predicting crop yield using deep learning (DL) and remote sensing is a promising technique in agriculture. In smallholder agriculture (<2 ha), which accounts for 84% of farms globally, it is crucial to build a model that can be useful across several fields (high spatial transferability). However, enhancing spatial model transferability in a small-scale setting faces significant challenges, including spatial autocorrelation, heterogeneity and scale dependence of spatial dynamics, as well as the need to address limited data points. This study aimed to test the hypothesis that spatial cross validation (SCV) is a more suitable model validation practice than random cross validation (RCV) to enhance model transferability for spatial prediction in a small-scale farming setting. We compared the performances of DL models that predict crop yield for several settings, including three crop types and two DL architectures, based on RCV with and without overlapping samples and SCV. Notably, we conducted model performance tests on external, equally sized fields instead of the field used for training. We used high-resolution RGB imagery taken with a drone as input. Our results show that the models using SCV outperformed those using RCV when the models were tested on external fields (on average r = 0.37 for SCV, r = 0.18 for RCV with overlap and r = 0.07 without), even though the models using SCV showed a substantially lower performance for cross validation (CV) than those using RCV (CV r = 0.73 for SCV versus 0.98 and 0.73 for RCV with and without overlap, respectively). The results suggest that RCV leads to over-optimism by overfitting the spatial structure and remembering image-specific information (so-called memorization). Our study offers the first empirical evidence in agriculture that SCV is preferable to RCV in small field settings for making DL models more transferable.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100064"},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000073/pdfft?md5=50c355dd3d3f1275fbe75dfa9e3ceab5&pid=1-s2.0-S2667393224000073-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140643663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
X. Briottet , K. Adeline , T. Bajjouk , V. Carrère , M. Chami , Y. Constans , Y. Derimian , A. Dupiau , M. Dumont , S. Doz , S. Fabre , P.Y. Foucher , H. Herbin , S. Jacquemoud , M. Lang , A. Le Bris , P. Litvinov , S. Loyer , R. Marion , A. Minghelli , B. Cheul
{"title":"End-to-end simulations to optimize imaging spectroscopy mission requirements for seven scientific applications","authors":"X. Briottet , K. Adeline , T. Bajjouk , V. Carrère , M. Chami , Y. Constans , Y. Derimian , A. Dupiau , M. Dumont , S. Doz , S. Fabre , P.Y. Foucher , H. Herbin , S. Jacquemoud , M. Lang , A. Le Bris , P. Litvinov , S. Loyer , R. Marion , A. Minghelli , B. Cheul","doi":"10.1016/j.ophoto.2024.100060","DOIUrl":"10.1016/j.ophoto.2024.100060","url":null,"abstract":"<div><p>CNES is currently carrying out a Phase A study to assess the feasibility of a future hyperspectral imaging sensor (10 m spatial resolution) combined with a panchromatic camera (2.5 m spatial resolution). This mission focuses on both high spatial and spectral resolution requirements, as inherited from previous French studies such as HYPEX, HYPXIM, and BIODIVERSITY. To meet user requirements, cost, and instrument compactness constraints, CNES asked the French hyperspectral Mission Advisory Group (MAG), representing a broad French scientific community, to provide recommendations on spectral sampling, particularly in the Short Wave InfraRed (SWIR), for various applications.</p><p>This paper presents the tests carried out with the aim of defining the optimal spectral sampling and spectral resolution in the SWIR domain for quantitative estimation of physical variables and classification purposes. The targeted applications are geosciences (mineralogy, soil moisture content), forestry (tree species classification, leaf functional traits), coastal and inland waters (bathymetry, water column, bottom classification in shallow water, coastal habitat classification), urban areas (land cover), industrial plumes (aerosols, methane and carbon dioxide), cryosphere (specific surface area, equivalent black carbon concentration), and atmosphere (water vapor, carbon dioxide and aerosols). All the products simulated in this exercise used the same CNES end-to-end processing chain, with realistic instrument parameters, enabling easy comparison between applications. In total, 648 simulations were carried out with different spectral strategies, radiometric calibration performances and signal-to-noise ratios (SNR): 24 instrument configurations × 25 datasets (22 images + 3 spectral libraries).</p><p>The results show that spectral sampling up to 20 nm in the SWIR range is sufficient for most applications. However, 10 nm spectral sampling is recommended for applications based on specific absorption bands such as mineralogy, industrial plumes or atmospheric gases. In addition, a slight performance loss is generally observed when radiometric calibration accuracy decreases, with a few exceptions in bathymetry and in the cryosphere, for which the observed performance is severely degraded. Finally, most applications can be achieved with a <em>realistic</em> SNR, with the exception of bathymetry, shallow water classification, and carbon dioxide and methane estimation, which require the <em>optimistic</em> SNR level tested. On the basis of these results, CNES is currently evaluating the best compromise for designing the future hyperspectral sensor to meet the objectives of priority applications.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100060"},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000036/pdfft?md5=a4765581693a72be42a56629872e3511&pid=1-s2.0-S2667393224000036-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140092723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mikael Reichler , Josef Taher , Petri Manninen , Harri Kaartinen , Juha Hyyppä , Antero Kukko
{"title":"Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks","authors":"Mikael Reichler , Josef Taher , Petri Manninen , Harri Kaartinen , Juha Hyyppä , Antero Kukko","doi":"10.1016/j.ophoto.2024.100061","DOIUrl":"10.1016/j.ophoto.2024.100061","url":null,"abstract":"<div><p>Real-time semantic segmentation of point clouds has increasing importance in applications related to 3D city modelling and mapping, automated inventory of forests, autonomous driving and mobile robotics. Current state-of-the-art point cloud semantic segmentation methods rely heavily on the availability of 3D laser scanning data. This is problematic for low-latency, real-time applications that use data from high-precision mobile laser scanners, as those are typically 2D line scanning devices. In this study, we experiment with real-time semantic segmentation of high-density multispectral point clouds collected from 2D line scanners in urban environments using encoder-decoder convolutional neural network architectures. We introduce a rasterized multi-scan input format that can be constructed exclusively from the raw (non-georeferenced profiles) 2D laser scanner measurement stream without odometry information. In addition, we investigate the impact of multispectral data on the segmentation accuracy. The dataset used for training, validation and testing was collected with the multispectral FGI AkhkaR4-DW backpack laser scanning system operating at the wavelengths of 905 nm and 1550 nm, and consists of 228 million points in total (39 583 scans). The data was divided into 13 classes that represent various targets in urban environments. The results show that the increased spatial context of the multi-scan format improves the segmentation performance on the single-wavelength lidar dataset from 45.4 mIoU (a single scan) to 62.1 mIoU (24 consecutive scans). In the multispectral point cloud experiments we achieved a 71 % and 28 % relative increase in the segmentation mIoU (43.5 mIoU) as compared to the purely single-wavelength reference experiments, in which we achieved 25.4 mIoU (905 nm) and 34.1 mIoU (1550 nm). Our findings show that it is possible to semantically segment 2D line scanner data with good results by combining consecutive scans, without the need for odometry information. The results also serve as motivation for developing multispectral mobile laser scanning systems that can be used in challenging urban surveys.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100061"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000048/pdfft?md5=6faf1ff37f867c363f5ed0c6399534c9&pid=1-s2.0-S2667393224000048-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140090915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Miguel Vallejo Orti , Katharina Anders , Oluibukun Ajayi , Olaf Bubenzer , Bernhard Höfle
{"title":"Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning","authors":"Miguel Vallejo Orti , Katharina Anders , Oluibukun Ajayi , Olaf Bubenzer , Bernhard Höfle","doi":"10.1016/j.ophoto.2024.100059","DOIUrl":"10.1016/j.ophoto.2024.100059","url":null,"abstract":"<div><p>Scalable and transferable methods for generating reliable reference data for automated remote sensing approaches are crucial, especially for mapping complex Earth surface processes such as gully erosion in low-populated and inaccessible areas. As an alternative to labour-intensive in-situ authoritative mapping, collaborative approaches enable volunteers to generate redundant independent geoinformation by digitising Earth observation imagery. We address the challenge of mapping complex gully outlines by integrating multi-user contributions of the same gully network. Comparing Sentinel 2, Bing Aerial, and unoccupied aerial vehicle orthophoto base maps, we examine the volunteered geographic information process and multi-contribution integration using Kalman filtering and machine learning to segment a gully border in a remote area in northwestern Namibia. The Kalman filter integrates the different lines into a smoothed solution, and a Random Forest model is used to identify mapping conditions and terrain features as key predictors for evaluating contributors' digitising quality. Assessing results with expert-based reference data, we identify ten contributions as optimal, yielding root mean square distance values of 19.1 m, 15.9 m and 16.6 m, and variability of 2.0 m, 4.2 m and 3.8 m (root mean square distance standard deviation) for Sentinel 2, Bing Aerial, and unoccupied aerial vehicle orthophoto, respectively. Eliminating the lowest performing contributions for Sentinel 2 using a Random Forest regression-based quality indicator improves the accuracy by up to 35% in the root mean square distance compared to a random selection, and up to 54% compared to a supervised remote sensing classification. Results for Sentinel 2 show that low slope, low terrain ruggedness index, and high normalised difference vegetation index values are correlated with high spatial mapping deviations, with Pearson correlation coefficients of −0.61, −0.5, and 0.18, respectively. Our approach is a powerful alternative for authoritative mapping of morphologically complex environmental phenomena and can provide independent reference data for supervised automatic remote sensing analysis.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100059"},"PeriodicalIF":0.0,"publicationDate":"2024-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000024/pdfft?md5=48a1afef19ee80fc26305409481984b5&pid=1-s2.0-S2667393224000024-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139874969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lukas Lucks , Uwe Stilla , Ludwig Hoegner , Christoph Holst
{"title":"Photogrammetric rockfall monitoring in Alpine environments using M3C2 and tracked motion vectors","authors":"Lukas Lucks , Uwe Stilla , Ludwig Hoegner , Christoph Holst","doi":"10.1016/j.ophoto.2024.100058","DOIUrl":"10.1016/j.ophoto.2024.100058","url":null,"abstract":"<div><p>This paper introduces methods for monitoring rock slope movements in Alpine environments based on terrestrial images. The first method is a photogrammetric point cloud-based deformation analysis relying on M3C2. Although effective in identifying large changes, the method has a tendency to underestimate smaller-scale movements. A feature-based method is presented to address this limitation, using SIFT features to track keypoints in images from different epochs. These automatically detected 3D vectors offer high spatial density and enable small-scale movement detection in the order of a few millimeters. The results are incorporated into a deformation analysis that allows statistically based conclusions about the ongoing movements. The workflow relies on georegistration using Ground Control Points. To investigate the possibility of avoiding these points, a registration method based on the ICP algorithm and M3C2 is tested. The study utilizes data from an active landslide site at Hochvogel Mountain in the Alps, analyzing changes and deformations from 2018 to 2021 and revealing an average motion of 75 mm.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"12 ","pages":"Article 100058"},"PeriodicalIF":0.0,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393224000012/pdfft?md5=5c428099c72948419171303ad7c14d16&pid=1-s2.0-S2667393224000012-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139826629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}