ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences: Latest Publications

A METHOD TO GENERATE FLOOD MAPS IN 3D USING DEM AND DEEP LEARNING
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-25-2020
A. Gebrehiwot, L. Hashemi-Beni
Abstract. High-resolution remote sensing imagery has been increasingly used for flood applications. Different methods have been proposed for flood extent mapping, from creating water indices to classifying high-resolution data. Among these methods, deep learning has shown promising results for flood extent extraction; however, these two-dimensional (2D) image classification methods cannot directly provide water level measurements. This paper presents an integrated approach to extract the flood extent in three dimensions (3D) from UAV data by combining a 2D deep learning-based flood map with a 3D point cloud extracted using a Structure from Motion (SfM) method. We fine-tuned a pretrained Visual Geometry Group 16 (VGG-16) based fully convolutional model to create a 2D inundation map. The 2D classified map was overlaid on the SfM-based 3D point cloud to create a 3D flood map. The floodwater depth was estimated by subtracting a pre-flood Digital Elevation Model (DEM) from the SfM-based DEM. The results show that the proposed method is efficient in creating a 3D flood extent map to support emergency response and recovery activities during a flood event.
Pages: 25-28
Citations: 5
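The floodwater-depth step described above, subtracting a pre-flood DEM from the SfM-derived DEM over the classified flood extent, amounts to a per-pixel raster difference. A minimal sketch, assuming two co-registered GeoTIFF DEMs on the same grid and a binary flood mask exported from the 2D classifier; the file names are placeholders, not from the paper.

```python
import numpy as np
import rasterio

# Assumed inputs: co-registered rasters on the same grid (placeholder file names).
with rasterio.open("sfm_dem.tif") as src:
    sfm_dem = src.read(1).astype("float64")   # DEM from SfM during the flood
    profile = src.profile
with rasterio.open("preflood_dem.tif") as src:
    pre_dem = src.read(1).astype("float64")   # pre-flood reference DEM
with rasterio.open("flood_mask.tif") as src:
    flood_mask = src.read(1) == 1             # 1 = pixel classified as floodwater

# Water depth = flooded-surface elevation minus pre-flood ground elevation,
# evaluated only where the 2D classifier labelled water.
depth = np.where(flood_mask, sfm_dem - pre_dem, np.nan)
depth = np.clip(depth, 0, None)               # negative differences treated as zero depth

profile.update(dtype="float64", nodata=np.nan)
with rasterio.open("flood_depth.tif", "w", **profile) as dst:
    dst.write(depth, 1)
```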
THE CASE FOR LOW-COST, PERSONALIZED VISUALIZATION FOR ENHANCING NATURAL HAZARD PREPAREDNESS
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-37-2020
Peter Gmelch, R. Lejano, Evan O'Keeffe, D. Laefer, Cady Drell, M. Bertolotto, U. Ofterdinger, Jennifer McKinley
Abstract. Each year, lives are needlessly lost to floods due to residents failing to heed evacuation advisories. Risk communication research suggests that flood warnings need to be more vivid, contextualized, and visualizable in order to engage the message recipient. This paper makes the case for the development of a low-cost augmented reality tool that enables individuals to visualize, at close range and in three dimensions, their homes, schools, and places of work and worship subjected to flooding (modeled upon a series of federally expected flood hazard levels). This paper also introduces initial tool development in this area and the related data input stream.
Pages: 37-44
Citations: 4
CNN-BASED PLACE RECOGNITION TECHNIQUE FOR LIDAR SLAM
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-117-2020
Y. Yang, S. Song, C. Toth
Abstract. Place recognition, or loop closure, is a technique to recognize landmarks and/or scenes previously visited by a mobile sensing platform in an area. The technique is a key function for robustly performing Simultaneous Localization and Mapping (SLAM) in any environment, including global positioning system (GPS)-denied environments, by enabling global optimization to compensate for the drift of dead-reckoning navigation systems. Place recognition in 3D point clouds is a challenging task that is traditionally handled with the aid of other sensors, such as cameras and GPS. Unfortunately, visual place recognition techniques may be impacted by changes in illumination and texture, and GPS may perform poorly in urban areas. To mitigate this problem, state-of-the-art Convolutional Neural Network (CNN)-based 3D descriptors may be applied directly to 3D point clouds. In this work, we investigated the performance of different classification strategies utilizing a cutting-edge CNN-based 3D global descriptor (PointNetVLAD) for the place recognition task on the Oxford RobotCar dataset.
Pages: 117-122
Citations: 2
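PointNetVLAD reduces each point-cloud submap to a fixed-length global descriptor, so place recognition becomes nearest-neighbour retrieval in descriptor space. A minimal sketch of that retrieval step in plain NumPy, assuming descriptors have already been computed and L2-normalised; the distance threshold and array names are illustrative, not from the paper.

```python
import numpy as np

def recognize_place(query_desc, db_descs, threshold=0.3):
    """Return (index, distance) of the closest database submap, or None if no match.

    query_desc : (D,) L2-normalised global descriptor of the current submap.
    db_descs   : (N, D) descriptors of previously visited submaps.
    threshold  : illustrative Euclidean-distance cutoff for declaring a loop closure.
    """
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return (best, float(dists[best])) if dists[best] < threshold else None

# Toy usage with random vectors standing in for PointNetVLAD descriptors.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 256))
db /= np.linalg.norm(db, axis=1, keepdims=True)
query = db[42] + 0.01 * rng.normal(size=256)   # revisit of place 42 with small noise
query /= np.linalg.norm(query)
print(recognize_place(query, db))              # -> (42, small distance)
```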
STUDY OF ACTIVE FARMLAND USE TO SUPPORT AGENT-BASED MODELING OF FOOD DESERTS
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-9-2020
S. Dhamankar, L. Hashemi-Beni, L. Kurkalova, C. Liang, T. Mulrooney, M. Jha, G. Monty, H. Miao
Abstract. A food desert (FD) is an area with limited access to affordable and nutritious foods such as fresh fruits, vegetables, and other healthful whole foods. FDs are an important socio-economic problem in North Carolina (NC), potentially contributing to obesity in low-income areas. If farmland is available, local vegetable production could potentially help alleviate FDs. However, little is known about land use and land-use transitions (LUTs) in the vicinity of FDs. To fill this knowledge gap, we study farmland use in three NC counties, Bladen, Guilford, and Rutherford, located in the Coastal, Piedmont, and Mountain regions of the state, respectively. The analysis combines the United States Department of Agriculture (USDA) 2015 FD/NFD delineation of census tracts with geospatial soil productivity and 2008–2019 land cover data. The understanding of farmland use is expected to contribute to the development of the LUT components of FD Agent-Based Models (ABMs).
Pages: 9-13
Citations: 1
THE ACCESSIBILITY AND SPATIAL PATTERNS OF GREEN OPEN SPACE BASED ON GIS
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-55-2020
D. Liu, Y. Shi
Abstract. Studies show that green open space (GOS) is beneficial to visitors' mental and physical health and has positive social values. This study took four global cities as examples: Shanghai, Tokyo, New York, and London. The per capita area, coverage rate, and availability of GOS were calculated. The GOS was then classified according to scale and morphological features, and the relations between availability and spatial patterns were analyzed. The results showed that the four cities fall into two classes: Shanghai and Tokyo are high-population-density cities with medium GOS coverage and availability, while New York and London are medium-population-density cities with high GOS coverage and availability. A high GOS coverage rate was found not to necessarily lead to higher availability. Shanghai and London could increase the amount of small GOS to ease the shortage of availability, and London and Tokyo could consider adding linear GOS to improve the connectivity of GOS.
Pages: 55-59
Citations: 0
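The per capita area and coverage rate reported above are straightforward ratios of total GOS area to population and to city area. A minimal sketch; the numbers in the usage line are placeholders, not figures from the study.

```python
def gos_metrics(gos_area_km2, population, city_area_km2):
    """Per capita GOS area (m^2 per person) and GOS coverage rate (% of city area)."""
    per_capita_m2 = gos_area_km2 * 1e6 / population
    coverage_pct = 100.0 * gos_area_km2 / city_area_km2
    return per_capita_m2, coverage_pct

# Placeholder values for illustration only.
print(gos_metrics(gos_area_km2=150.0, population=8_900_000, city_area_km2=1_572.0))
```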
DEEP LEARNING FOR REMOTE SENSING IMAGE CLASSIFICATION FOR AGRICULTURE APPLICATIONS
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-51-2020
L. Hashemi-Beni, A. Gebrehiwot
Abstract. This research examines the ability of deep learning methods to classify remote sensing images for agriculture applications. U-net and FCN-8s convolutional neural network models are fine-tuned, utilized, and tested for crop/weed classification. The dataset for this study includes 60 top-down images of an organic carrot field, collected by an autonomous vehicle and labeled by experts. The FCN-8s model achieved 75.1% accuracy in detecting weeds compared to 66.72% for U-net using 60 training images. However, the U-net model performed better at detecting crops, with 60.48% accuracy compared to 47.86% for FCN-8s.
Pages: 51-54
Citations: 16
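The per-class detection figures quoted above (weeds vs. crops, FCN-8s vs. U-net) can be computed from a classified map and a reference label map. The abstract does not state the exact accuracy definition used, so the sketch below shows one common choice, per-class recall (producer's accuracy); the toy label maps and class codes are placeholders.

```python
import numpy as np

def per_class_accuracy(pred, ref, class_id):
    """Fraction of reference pixels of `class_id` that the model labelled correctly
    (per-class recall / producer's accuracy), computed from two label maps."""
    mask = ref == class_id
    return float(np.mean(pred[mask] == class_id)) if mask.any() else float("nan")

# Toy label maps: 0 = background, 1 = crop, 2 = weed (illustrative only).
ref = np.array([[1, 1, 2], [2, 0, 1]])
pred = np.array([[1, 2, 2], [2, 0, 0]])
print(per_class_accuracy(pred, ref, class_id=2))  # weed detection accuracy
print(per_class_accuracy(pred, ref, class_id=1))  # crop detection accuracy
```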
DIGITAL SURFACE MODEL DERIVED FROM UAS IMAGERY ASSESSMENT USING HIGH-PRECISION AERIAL LIDAR AS REFERENCE SURFACE
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-61-2020
J. Lopez, R. Munjy
Abstract. Imagery captured from unmanned aerial systems (UAS) has found significant utility in the field of surveying and mapping as the efforts of the computer vision field have combined with the principles of photogrammetry. Its respectability in the remote sensing community has increased as the miniaturization of on-board survey-grade global navigation satellite system (GNSS) receivers has made it possible to achieve high network accuracy, contributing to effective aerotriangulation. UAS photogrammetry has gained much popularity because of its effectiveness, efficiency, economy, and especially its availability and ease of use. Although photogrammetry has proven to meet and exceed planimetric precision and accuracy requirements, several variables tend to cause deficiencies in vertical accuracy. This research aims to demonstrate the achievable overall accuracy of surface modelling through minimization of systematic errors, using fixed-wing platforms designed for high-accuracy surveying, the SenseFly eBee Plus and eBee X, equipped with survey-grade GNSS receivers and a 20 MP integrated, fixed-focal-length camera. The UAS campaign was flown over a 320 m by 320 m site with 81 surveyed 3D ground control points, whose positions were surveyed to 1.0 cm horizontal and 0.5 cm vertical accuracy using static GNSS methods and digital leveling, respectively. All aerotriangulation (AT) accuracy was based on 75 independent checkpoints. The digital surface model (DSM) was compared to a reference DSM generated from high-precision manned aerial LiDAR using the Optech Galaxy scanner. Overall vertical accuracy was at the sub-decimeter level in both commercial software packages used, Pix4Dmapper and Agisoft Metashape.
Pages: 61-67
Citations: 0
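Comparing a photogrammetric DSM against independent checkpoints or a LiDAR reference surface typically reduces to elevation residuals and summary statistics such as RMSE. A minimal sketch, assuming DSM elevations have already been sampled at the checkpoint locations; the elevation values shown are placeholders, not the paper's results.

```python
import numpy as np

def vertical_accuracy(dsm_z, check_z):
    """Residual statistics of DSM elevations against checkpoint/reference elevations (metres)."""
    res = np.asarray(dsm_z, dtype=float) - np.asarray(check_z, dtype=float)
    return {
        "mean_error_m": float(res.mean()),
        "rmse_m": float(np.sqrt(np.mean(res ** 2))),
        "nmad_m": float(1.4826 * np.median(np.abs(res - np.median(res)))),  # robust spread
    }

# Placeholder elevations at a handful of checkpoints, for illustration only.
print(vertical_accuracy(dsm_z=[101.03, 98.87, 102.41], check_z=[101.00, 98.92, 102.35]))
```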
EXTRACTING BUILT-UP FEATURES IN COMPLEX BIOPHYSICAL ENVIRONMENTS BY USING A LANDSAT BANDS RATIO
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-79-2020
A. H. N. Mfondoum, Paul Gérard Gbetkom, R. Cooper, Sofia Hakdaoui, M. B. Mansour Badamassi
Abstract. This paper addresses the challenging remote sensing problem of urban mixed pixels in medium-spatial-resolution satellite data. The tentatively named Normalized Difference Built-up and Surroundings Unmixing Index (NDBSUI) is proposed using Landsat-8 Operational Land Imager (OLI) bands. It uses the shortwave infrared 2 (SWIR2) band as the main wavelength, together with the SWIR1 and red bands, for built-up extraction. A ratio is computed based on a normalization process, and the index is applied to six cities with different urban and environmental characteristics. The built-up area of the experimental site of Yaounde is extracted with an overall accuracy of 95.51% and a kappa coefficient of 0.90. The NDBSUI is validated over five other sites chosen according to Cameroon's bioclimatic zoning. The results are satisfactory for the cities of Yokadouma and Kumba in the bimodal and monomodal rainfall zones, where overall accuracies reach 98.9% and 97.5%, with kappa coefficients of 0.88 and 0.94 respectively, although these values are close to those of three other indices. However, in the cities of Foumban, Ngaoundere, and Garoua, representing the western highlands, the high Guinea savannah, and the Sudano-Sahelian zones, where built-up is more easily confused with soil features, overall accuracies of 97.06%, 95.29%, and 74.86%, corresponding to kappa coefficients of 0.918, 0.89, and 0.42, were recorded. Differences in accuracy relative to EBBI, NDBI, and UI are up to 31.66%, confirming the efficiency of NDBSUI in automating built-up extraction and unmixing from surrounding noise with less bias.
Pages: 79-85
Citations: 2
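The abstract does not give the NDBSUI formula itself, so it is not reproduced here. As an illustration of the normalized-difference band-ratio family the index is compared against, the sketch below computes the classic NDBI, (SWIR1 − NIR) / (SWIR1 + NIR), from Landsat-8 OLI reflectance arrays (band 6 = SWIR1, band 5 = NIR); the toy reflectance values are placeholders.

```python
import numpy as np

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index: (SWIR1 - NIR) / (SWIR1 + NIR).
    Inputs are reflectance arrays for Landsat-8 OLI band 6 (SWIR1) and band 5 (NIR)."""
    swir1 = np.asarray(swir1, dtype="float64")
    nir = np.asarray(nir, dtype="float64")
    denom = swir1 + nir
    out = np.full(swir1.shape, np.nan)
    np.divide(swir1 - nir, denom, out=out, where=denom != 0)  # avoid division by zero
    return out  # built-up and bare surfaces tend toward positive values

# Toy reflectance values for illustration only.
print(ndbi([[0.30, 0.12]], [[0.22, 0.35]]))
```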
EVALUATION OF CONVERTING LANDSAT DN TO TA AND SR VALUES ON SELECT SPECTRAL INDICES
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-29-2020
A. Gettinger, R. Sivanpillai
Abstract. The complete archive of images collected across all Landsat missions has been reprocessed and categorized by the U.S. Geological Survey (USGS) into a three-tiered architecture: Real-time, Tier-1, and Tier-2. This tiered architecture ensures data compatibility and is convenient for acquiring high-quality scenes for pixel-by-pixel change analyses. However, it is important to evaluate the effects of converting older Landsat images from digital numbers (DN) to top-of-atmosphere (TA) and surface reflectance (SR) values that are equivalent to more recent Landsat data. This study evaluated the effects of this conversion on spectral indices derived from Tier-1 (the highest quality) Landsat 5 and 8 scenes at 30 m spatial resolution. Spectral brightness and reflectance of mixed conifers, Northern Mixed Grass Prairie, deep water, shallow water, and edge water were extracted as DN, TA, and SR values, respectively. Spectral indices were estimated and compared to determine whether the analysis of these land cover classes or their conditions would differ depending on which preprocessed image type was used (DN, TA, or SR). Results from this study will be informative for others making use of indices with images from multiple Landsat satellites, as well as for engineers planning to reprocess images for future Landsat collections. This time-series study showed that there was a significant difference between index values derived from the three levels of pre-processing. Average index values of vegetation cover classes were consistently and significantly different between levels of pre-processing, whereas average water index values showed inconsistent significant differences between pre-processing levels.
Pages: 29-36
Citations: 0
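One standard way to convert Landsat DNs to top-of-atmosphere (TA) reflectance is the USGS Level-1 rescaling: ρλ' = Mρ·Qcal + Aρ, then ρλ = ρλ' / sin(θSE), where Mρ and Aρ are the REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x coefficients from the scene's MTL file and θSE is the sun elevation. Whether the study used exactly this pipeline is not stated in the abstract; the sketch below is a minimal illustration, and the DN values and sun elevation are placeholders.

```python
import numpy as np

def dn_to_toa_reflectance(dn, mult, add, sun_elev_deg):
    """Landsat DN -> TOA reflectance:
    rho' = M * Qcal + A, then rho = rho' / sin(sun elevation).
    `mult` and `add` come from REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x
    in the scene's MTL metadata file."""
    rho = mult * np.asarray(dn, dtype="float64") + add
    return rho / np.sin(np.deg2rad(sun_elev_deg))

# Placeholder inputs for illustration (read the real coefficients from the MTL file).
dn = np.array([[7523, 9210], [8054, 12001]])
print(dn_to_toa_reflectance(dn, mult=2.0e-05, add=-0.1, sun_elev_deg=55.2))
```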
IDENTIFYING EPIPHYTES IN DRONES PHOTOS WITH A CONDITIONAL GENERATIVE ADVERSARIAL NETWORK (C-GAN)
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Pub Date: 2020-11-17. DOI: 10.5194/isprs-archives-xliv-m-2-2020-99-2020
A. Shashank, V. Sajithvariyar, V. Sowmya, K. Soman, R. Sivanpillai, G. Brown
Abstract. Unmanned Aerial Vehicle (UAV) missions often collect large volumes of imagery data. However, not all images will have useful information or be of sufficient quality. Manually sorting these images and selecting useful data is both time consuming and prone to interpreter bias. Deep neural network algorithms are capable of processing large image datasets and can be trained to identify specific targets. Generative Adversarial Networks (GANs) consist of two competing networks, a Generator and a Discriminator, that can analyze, capture, and copy the variations within a given dataset. In this study, we selected a variant of GAN called Conditional-GAN, which incorporates an additional label parameter, for identifying epiphytes in photos acquired by a UAV in forests within Costa Rica. We trained the network with 70%, 80%, and 90% of 119 photos containing the target epiphyte, Werauhia kupperiana (Bromeliaceae), and validated the algorithm's performance using validation data that were not used for training. The accuracy of the output was measured using the structural similarity index measure (SSIM) and the histogram correlation (HC) coefficient. Results obtained in this study indicated that the output images generated by C-GAN were similar (average SSIM = 0.89–0.91 and average HC = 0.97–0.99) to the analyst-annotated images. However, C-GAN had difficulty identifying the target plant when it was far from the camera, poorly lit, or covered by other plants. Results obtained in this study demonstrate the potential of C-GAN to reduce the time spent by botanists to identify epiphytes in images acquired by UAVs.
Pages: 99-104
Citations: 11
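Both agreement metrics used above, SSIM and histogram correlation, are available from common Python libraries. A minimal sketch, assuming the generated and analyst-annotated images have been loaded as same-sized grayscale float arrays; the array names, bin settings, and toy images are illustrative, not from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def compare_images(generated, reference):
    """SSIM and histogram-correlation coefficient between two grayscale float images
    of identical shape."""
    ssim = structural_similarity(generated, reference,
                                 data_range=reference.max() - reference.min())
    h1, _ = np.histogram(generated, bins=256, range=(0.0, 1.0))
    h2, _ = np.histogram(reference, bins=256, range=(0.0, 1.0))
    hist_corr = float(np.corrcoef(h1, h2)[0, 1])  # Pearson correlation of the histograms
    return ssim, hist_corr

# Toy images for illustration only.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
gen = np.clip(ref + 0.05 * rng.normal(size=(64, 64)), 0, 1)
print(compare_images(gen, ref))
```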