Mikael Reichler, Josef Taher, Petri Manninen, Harri Kaartinen, Juha Hyyppä, Antero Kukko. "Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks." ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100061 (2024-03-01). DOI: 10.1016/j.ophoto.2024.100061

Abstract: Real-time semantic segmentation of point clouds is of increasing importance in applications related to 3D city modelling and mapping, automated forest inventory, autonomous driving and mobile robotics. Current state-of-the-art point cloud semantic segmentation methods rely heavily on the availability of 3D laser scanning data. This is problematic for low-latency, real-time applications that use data from high-precision mobile laser scanners, as those are typically 2D line scanning devices. In this study, we experiment with real-time semantic segmentation of high-density multispectral point clouds collected from 2D line scanners in urban environments using encoder–decoder convolutional neural network architectures. We introduce a rasterized multi-scan input format that can be constructed exclusively from the raw (non-georeferenced profiles) 2D laser scanner measurement stream without odometry information. In addition, we investigate the impact of multispectral data on segmentation accuracy. The dataset used for training, validation and testing was collected with the multispectral FGI AkhkaR4-DW backpack laser scanning system operating at wavelengths of 905 nm and 1550 nm, and consists of 228 million points (39 583 scans) in total. The data was divided into 13 classes representing various targets in urban environments. The results show that the increased spatial context of the multi-scan format improves segmentation performance on the single-wavelength lidar dataset from 45.4 mIoU (a single scan) to 62.1 mIoU (24 consecutive scans). In the multispectral point cloud experiments we achieved 43.5 mIoU, a relative increase of 71 % and 28 % over the purely single-wavelength reference experiments, which reached 25.4 mIoU (905 nm) and 34.1 mIoU (1550 nm). Our findings show that it is possible to semantically segment 2D line scanner data with good results by combining consecutive scans, without the need for odometry information. The results also serve as motivation for developing multispectral mobile laser scanning systems that can be used in challenging urban surveys.
Miguel Vallejo Orti, Katharina Anders, Oluibukun Ajayi, Olaf Bubenzer, Bernhard Höfle. "Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning." ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100059 (2024-02-10). DOI: 10.1016/j.ophoto.2024.100059

Abstract: Scalable and transferable methods for generating reliable reference data for automated remote sensing approaches are crucial, especially for mapping complex Earth surface processes such as gully erosion in sparsely populated and inaccessible areas. As an alternative to labour-intensive in-situ authoritative mapping, collaborative approaches enable volunteers to generate redundant, independent geoinformation by digitising Earth observation imagery. We address the challenge of mapping complex gully outlines by integrating multi-user contributions covering the same gully network. Comparing Sentinel 2, Bing Aerial and unoccupied aerial vehicle orthophoto base maps, we examine the volunteered geographic information process and multi-contribution integration using Kalman filtering and machine learning to segment a gully border in a remote area in northwestern Namibia. The Kalman filter integrates the different lines into a smoothed solution, and a Random Forest model is used to identify mapping conditions and terrain features as key predictors for evaluating contributors' digitising quality. Assessing results against expert-based reference data, we identify ten contributions as optimal, yielding root mean square distance values of 19.1 m, 15.9 m and 16.6 m, with variability of 2.0 m, 4.2 m and 3.8 m (root mean square distance standard deviation) for Sentinel 2, Bing Aerial and unoccupied aerial vehicle orthophoto, respectively. Eliminating the lowest-performing contributions for Sentinel 2 using a Random Forest regression-based quality indicator improves accuracy by up to 35% in root mean square distance compared to a random selection, and by up to 54% compared to a supervised remote sensing classification. Results for Sentinel 2 show that low slope, low terrain ruggedness index and high normalised difference vegetation index values correlate with high spatial mapping deviations, with Pearson correlation coefficients of −0.61, −0.5 and 0.18, respectively. Our approach is a powerful alternative for authoritative mapping of morphologically complex environmental phenomena and can provide independent reference data for supervised automatic remote sensing analysis.
Lukas Lucks, Uwe Stilla, Ludwig Hoegner, Christoph Holst. "Photogrammetric rockfall monitoring in Alpine environments using M3C2 and tracked motion vectors." ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100058 (2024-02-06). DOI: 10.1016/j.ophoto.2024.100058

Abstract: This paper introduces methods for monitoring rock slope movements in Alpine environments based on terrestrial images. The first method is a photogrammetric point-cloud-based deformation analysis relying on M3C2. Although effective in identifying large changes, the method tends to underestimate smaller-scale movements. A feature-based method is presented to address this limitation, using SIFT features to track keypoints in images from different epochs. These automatically detected 3D vectors offer high spatial density and enable detection of small-scale movements in the order of a few millimeters. The results are incorporated into a deformation analysis that allows statistically based conclusions about the ongoing movements. The workflow relies on georegistration using Ground Control Points. To investigate the possibility of avoiding these points, a registration method based on the ICP algorithm and M3C2 is tested. The study utilizes data from an active landslide site at Hochvogel Mountain in the Alps, analyzing changes and deformations from 2018 to 2021 and revealing an average motion of 75 mm.
Kyriaki Mouzakidou, Aurélien Brun, Davide A. Cucci, Jan Skaloud. "Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment." ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 12, Article 100057 (2024-01-12). DOI: 10.1016/j.ophoto.2023.100057

Abstract: Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation through leveraging all available spatio-temporal constraints in a dynamic network i) with a commercial IMU for drones and ii) with simultaneous processing of raw observations of several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (image 2D and point-cloud 3D tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare these with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and the low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-coherent geo-referenced point cloud. We further demonstrate that the use of image constraints (in addition to lidar constraints) stabilizes the in-flight lidar boresight estimation by a factor of 3–10, establishing the feasibility of such estimation even in the absence of special calibration patterns or calibration targets.
{"title":"Revisiting the Past: A comparative study for semantic segmentation of historical images of Adelaide Island using U-nets","authors":"Felix Dahle, Roderik Lindenbergh, Bert Wouters","doi":"10.1016/j.ophoto.2023.100056","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100056","url":null,"abstract":"<div><p>The TriMetrogon Aerial (TMA) archive is an archive of historical images of Antarctica taken by the US Navy between 1940 and 2000 with analogue cameras. The analysis of such historic data can give a view of Antarctica's glaciers predating modern satellite imagery and provide unique insights into the long-term impact of changing climate conditions with essential validation data for climate modelling. However, the lack of semantic information for these images presents a challenge for large-scale computer-driven analysis.</p><p>Such information can be added to the data using semantic segmentation, but traditional algorithms fail on these scanned historical grayscale images, due to varying image quality, lack of colour information and artefacts in the images. To address this, we present a deep-learning-based U-net workflow. Our approach includes creating training data by pre-processing and labelling the raw images. Furthermore, different versions of the U-net are trained to optimize its hyperparameters and augmentation methods. With the optimal hyper-parameters and augmentation methods, a final model has been trained for a use-case to segment 118 images covering Adelaide Island.</p><p>We tested our approach by segmenting challenging historical images using a U-net model with just 80 training images, achieving an accuracy of 73% for 20 validation images. While no test data is available for our use case, a visual examination of the segmented images shows that our method performs effectively.</p><p>The comparison of the hyper-parameters and augmentation methods provides directions for training other U-net-based models so that the presented workflow can be used to segment other archives with historical imagery. Additionally, the labelled training data and the segmented images of the test are publicly available at <span>https://github.com/fdahle/antarctic_segmentation</span><svg><path></path></svg>.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"11 ","pages":"Article 100056"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393223000273/pdfft?md5=d102ce83a2ff8228dd333428f7d3bf8e&pid=1-s2.0-S2667393223000273-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139107227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Principled bundle block adjustment with multi-head cameras","authors":"Eleonora Maset , Luca Magri , Andrea Fusiello","doi":"10.1016/j.ophoto.2023.100051","DOIUrl":"https://doi.org/10.1016/j.ophoto.2023.100051","url":null,"abstract":"<div><p>This paper examines the effects of implementing relative orientation constraints on bundle adjustment, as well as provides a full derivation of the Jacobian matrix for such an adjustment, that can be used to facilitate other implementations of bundle adjustment with constrained cameras. We present empirical evidence demonstrating improved accuracy and reduced computational load when these constraints are imposed.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"11 ","pages":"Article 100051"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393223000224/pdfft?md5=104b2b21116c9955ace52700652a666b&pid=1-s2.0-S2667393223000224-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139111422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mieke Kuschnerus, Roderik Lindenbergh, Sander Vos, Ramon Hanssen. "Statistically assessing vertical change on a sandy beach from permanent laser scanning time series." ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 11, Article 100055 (2024-01-01). DOI: 10.1016/j.ophoto.2023.100055

Abstract: In view of climate change, understanding and managing effects on coastal areas and adjacent cities is essential. Permanent Laser Scanning (PLS) is a successful technique for observing sandy coasts not just incidentally or once a year, but (nearly) continuously over extended periods of time. The collected observations form a 4D point cloud data set representing the evolution of the coast and provide the opportunity to assess change processes at a high level of detail. For an exemplary location in Noordwijk, The Netherlands, three years of hourly point clouds were acquired on a 1 km long section of a typical Dutch urban sandy beach. Often, the so-called level of detection is used to assess point cloud differences between two epochs. To explicitly incorporate the temporal dimension of the height estimates from the point cloud data set, we revisit statistical testing theory. We apply multiple hypothesis testing on elevation time series in order to identify different coastal processes, such as aeolian sand transport or bulldozer works. We then estimate the minimal detectable bias for different alternative hypotheses, to quantify the minimal elevation change that can be estimated from the PLS observations over a certain period of time. Additionally, we analyse potential error sources and influences on the elevation estimates and provide orders of magnitude and possible ways to deal with them. Finally, we conclude that elevation time series from a long-term PLS data set are a suitable input for identifying aeolian sand transport with the help of multiple hypothesis testing. In our example case, slopes of 0.032 m/day and sudden changes of 0.031 m can be identified with a statistical power of 80% and 95% significance in 24-h time series on the upper beach. In the intertidal area the presented method allows classifying daily elevation time series over one month according to the dominating model (sudden change or linear trend) into either eroding or accreting behaviour.
Mariya Velikova, Juan Fernandez-Diaz, Craig Glennie. "ICESat-2 noise filtering using a point cloud neural network." ISPRS Open Journal of Photogrammetry and Remote Sensing, Vol. 11, Article 100053 (2023-12-06). DOI: 10.1016/j.ophoto.2023.100053

Abstract: The ATLAS sensor onboard the ICESat-2 satellite is a photon-counting lidar (PCL) with a primary mission to map Earth's ice sheets. A secondary goal of the mission is to provide vegetation and terrain elevations, which are essential for calculating the planet's biomass carbon reserves. A drawback of ATLAS is that the sensor does not provide reliable terrain height estimates in dense, high-closure forests, because only a few photons reach the ground through the canopy and return to the detector. This low penetration translates into lower accuracy for the resultant terrain model. Tropical forest measurements with ATLAS have the additional problem that frequent atmospheric phenomena such as fog and low clouds can be misinterpreted as the top of the canopy. To alleviate these issues, we propose using a ConvPoint neural network for 3D point clouds, trained on high-density airborne lidar, to classify vegetation and terrain returns from ATLAS. The semantic segmentation network provides excellent results and could be used in parallel with the current ATL08 noise-filtering algorithms, especially in areas with dense vegetation. We use high-density airborne lidar data acquired along ICESat-2 transects in Central American forests as a ground reference for training the neural network to distinguish between noise photons and photons lying between the terrain and the top of the canopy. Each photon event receives a label (noise or signal) in the test phase, providing automated noise filtering of the ATL03 data. The terrain and top-of-canopy elevations are subsequently aggregated in 100 m segments using a series of iterative smoothing filters. We demonstrate improved estimates for both terrain and top-of-canopy elevations compared to the ATL08 100 m segment estimates. The neural network (NN) noise filtering reliably eliminated outlier top-of-canopy estimates caused by low clouds, and the aggregated root mean square error (RMSE) decreased from 7.7 m for ATL08 to 3.7 m for the NN prediction (18 test profiles aggregated). For terrain elevations, the RMSE decreased from 5.2 m for ATL08 to 3.3 m for the NN prediction, compared to airborne lidar reference profiles.