{"title":"Instance segmentation of standing dead trees in dense forest from aerial imagery using deep learning","authors":"Abubakar Sani-Mohammed , Wei Yao , Marco Heurich","doi":"10.1016/j.ophoto.2022.100024","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100024","url":null,"abstract":"<div><p>Mapping standing dead trees, especially, in natural forests is very important for evaluation of the forest's health status, and its capability for storing Carbon, and the conservation of biodiversity. Apparently, natural forests have larger areas which renders the classical field surveying method very challenging, time-consuming, labor-intensive, and unsustainable. Thus, for effective forest management, there is the need for an automated approach that would be cost-effective. With the advent of Machine Learning, Deep Learning has proven to successfully achieve excellent results. This study presents an adjusted Mask R-CNN Deep Learning approach for detecting and segmenting standing dead trees in a mixed dense forest from CIR aerial imagery using a limited (195 images) training dataset. First, transfer learning is considered coupled with the image augmentation technique to leverage the limitation of training datasets. Then, we strategically selected hyperparameters to suit appropriately our model's architecture that fits well with our type of data (dead trees in images). Finally, to assess the generalization capability of our model's performance, a test dataset that was not confronted to the deep neural network was used for comprehensive evaluation. Our model recorded promising results reaching a mean average precision, average recall, and average F1-Score of 0.85, 0.88, and 0.87 respectively, despite our relatively low resolution (20 cm) dataset. Consequently, our model could be used for automation in standing dead tree detection and segmentation for enhanced forest management. 
This is equally significant for biodiversity conservation, and forest Carbon storage estimation.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"6 ","pages":"Article 100024"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393222000138/pdfft?md5=af9b87bf0aada51c275d40bd64597180&pid=1-s2.0-S2667393222000138-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90129549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic segmentation of bridge components and road infrastructure from mobile LiDAR data","authors":"Yi-Chun Lin, Ayman Habib","doi":"10.1016/j.ophoto.2022.100023","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100023","url":null,"abstract":"<div><p>Emerging mobile LiDAR mapping systems exhibit great potential as an alternative for mapping urban environments. Such systems can acquire high-quality, dense point clouds that capture detailed information over an area of interest through efficient field surveys. However, automatically recognizing and semantically segmenting different components from the point clouds with efficiency and high accuracy remains a challenge. Towards this end, this study proposes a semantic segmentation framework to simultaneously classify bridge components and road infrastructure using mobile LiDAR point clouds while providing the following contributions: 1) a deep learning approach exploiting graph convolutions is adopted for point cloud semantic segmentation; 2) cross-labeling and transfer learning techniques are developed to reduce the need for manual annotation; and 3) geometric quality control strategies are proposed to refine the semantic segmentation results. The proposed framework is evaluated using data from two mobile mapping systems along an interstate highway with 27 highway bridges. With the help of the proposed cross-labeling and transfer learning strategies, the deep learning model achieves an overall accuracy of 84% using limited training data. 
Moreover, the effectiveness of the proposed framework is verified through a test covering approximately 42 miles of the interstate highway, where substantial improvements after quality control are observed.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"6 ","pages":"Article 100023"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393222000126/pdfft?md5=1a7c8156610784afa8001652976df1dc&pid=1-s2.0-S2667393222000126-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91599620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of camera focal length influence on canopy reconstruction quality","authors":"Martin Denter , Julian Frey , Teja Kattenborn , Holger Weinacker , Thomas Seifert , Barbara Koch","doi":"10.1016/j.ophoto.2022.100025","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100025","url":null,"abstract":"<div><p>Unoccupied aerial vehicles (UAV) with RGB-cameras are affordable and versatile devices for the generation of a series of remote sensing products that can be used for forest inventory tasks, such as creating high-resolution orthomosaics and canopy height models. The latter may serve purposes including tree species identification, forest damage assessments, canopy height or timber stock assessments. Besides flight and image acquisition parameters such as image overlap, flight height, and weather conditions, the focal length, which determines the opening angle of the camera lens, is a parameter that influences the reconstruction quality. Despite its importance, the effect of focal length on the quality of 3D reconstructions of forests has received little attention in the literature. Shorter focal lengths result in more accurate distance estimates in the nadir direction since small angular errors lead to large positional errors in narrow opening angles. In this study, 3D reconstructions of four UAV-acquisitions with different focal lengths (21, 35, 50, and 85 mm) on a 1 ha mature mixed forest plot were compared to reference point clouds derived from high quality Terrestrial Laser Scans. Shorter focal lengths (21 and 35 mm) led to a higher agreement with the TLS scans and thus better reconstruction quality, while at 50 mm, quality losses were observed, and at 85 mm, the quality was considerably worse. F1-scores calculated from a voxel representation of the point clouds amounted to 0.254 with 35 mm and 0.201 with 85 mm. The precision with 21 mm focal length was 0.466 and 0.302 with 85 mm. 
We thus recommend a focal length no longer than 35 mm during UAV Structure from Motion (SfM) data acquisition for forest management practices.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"6 ","pages":"Article 100025"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266739322200014X/pdfft?md5=5069b0e26c4432e1604ba0f103f40aea&pid=1-s2.0-S266739322200014X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91774272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic orientation of historical terrestrial images in mountainous terrain using the visible horizon","authors":"Sebastian Mikolka-Flöry, Camillo Ressl, Lorenz Schimpl, Norbert Pfeifer","doi":"10.1016/j.ophoto.2022.100026","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100026","url":null,"abstract":"<div><p>Historical terrestrial images are the only visual sources documenting alpine environments shortly after the end of the Little Ice Age. Despite their unique value, they are largely unused for quantifying environmental changes because of the difficult and time-consuming estimation of the unknown camera parameters. For most images large parts of the captured scenery have vastly changed over time, making automatic feature point matching infeasible. In contrast, the visible image horizon seems to remain stable over time and hence, appears to be a suitable feature for image orientation. Since the focal length is unknown for historical terrestrial images, existing methods, focusing solely on estimating the exterior orientation of recent imagery, can not be applied. Accordingly, it was investigated if the horizon is suitable to estimate both the interior and exterior orientation of historical terrestrial images, with an accuracy comparable to manually oriented images. In a first step, the whole horizon was used to approximate the unknown camera parameters, reducing the potential search space. In the subsequent spatial resection these approximations were further refined using salient points along the horizon. We evaluated our approach using 204 manually oriented reference images. With the proposed method the accuracy of the estimated exterior orientation could be significantly improved compared to previous works. Additionally, the unknown focal length was estimated within 5% of the true focal length for 75% of the images. As historical terrestrial images are commonly used for monoplotting, the accuracy for 2400 manually selected checkpoints was evaluated. 
This analysis showed that for 63% of the images the same accuracy as with manually oriented images was achieved. For additional 22% the estimated camera parameters were still accurate enough to serve as initial estimates for a subsequent manual orientation. In 15% of the images our method completely failed. Due to the vastly changing scenery and oblique viewing geometry, finding the initial camera parameters, in our experience, is often the most challenging and time consuming step during manual orientation of historical images. Hence, in 85% of the images this initial step can be replaced with our method, leading to a significantly reduced effort for orienting whole collections of historical terrestrial images.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"6 ","pages":"Article 100026"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393222000151/pdfft?md5=560a55014069e3f6aa4ce83222a5345a&pid=1-s2.0-S2667393222000151-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91599621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instance segmentation of standing dead trees in dense forest from aerial imagery using deep learning","authors":"Abubakar Sani-Mohammed, W. Yao, M. Heurich","doi":"10.1016/j.ophoto.2022.100024","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100024","url":null,"abstract":"","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74074054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic orientation of historical terrestrial images in mountainous terrain using the visible horizon","authors":"Sebastian Mikolka-Flöry, C. Ressl, Lorenz Schimpl, N. Pfeifer","doi":"10.1016/j.ophoto.2022.100026","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100026","url":null,"abstract":"","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81681092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-temporal InSAR tropospheric delay modelling using Tikhonov regularization for Sentinel-1 C-band data","authors":"P. Kirui, B. Riedel, M. Gerke","doi":"10.1016/j.ophoto.2022.100020","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100020","url":null,"abstract":"","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85374590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic segmentation of bridge components and road infrastructure from mobile LiDAR data","authors":"Y. Lin, A. Habib","doi":"10.1016/j.ophoto.2022.100023","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100023","url":null,"abstract":"","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"57 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90901890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Observation distribution modelling and closed-from precision estimation of scanned 2D geometric features for network design","authors":"D. Lichti, K. Pexman, T. Chan","doi":"10.1016/j.ophoto.2022.100022","DOIUrl":"https://doi.org/10.1016/j.ophoto.2022.100022","url":null,"abstract":"","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83407098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Change detection in street environments based on mobile laser scanning: A fuzzy spatial reasoning approach","authors":"Joachim Gehrung , Marcus Hebel , Michael Arens , Uwe Stilla","doi":"10.1016/j.ophoto.2022.100019","DOIUrl":"10.1016/j.ophoto.2022.100019","url":null,"abstract":"<div><p>Automated change detection based on urban mobile laser scanning data is the foundation for a whole range of applications such as building model updates, map generation for autonomous driving and natural disaster assessment. The challenge with mobile LiDAR data is that various sources of error, such as localization errors, lead to uncertainties and contradictions in the derived information. This paper presents an approach to automatic change detection using a new category of generic evidence grids that addresses the above problems. Said technique, referred to as <em>fuzzy spatial reasoning</em>, solves common problems of state-of-the-art evidence grids and also provides a method of inference utilizing fuzzy Boolean reasoning. Based on this, logical operations are used to determine changes and combine them with semantic information. 
A quantitative evaluation based on a hand-annotated version of the TUM-MLS data set shows that the proposed method is able to identify confirmed and changed elements of the environment with F1-scores of 0.93 and 0.89.</p></div>","PeriodicalId":100730,"journal":{"name":"ISPRS Open Journal of Photogrammetry and Remote Sensing","volume":"5 ","pages":"Article 100019"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667393222000084/pdfft?md5=c3af0a03a8609bef474fa2788d7a7fda&pid=1-s2.0-S2667393222000084-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78917265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}