{"title":"Comparative Assessment of Target-Detection Algorithms for Urban Targets Using Hyperspectral Data","authors":"Shalini Gakhar, K. C. Tiwari","doi":"10.14358/pers.87.5.349","DOIUrl":"https://doi.org/10.14358/pers.87.5.349","url":null,"abstract":"Hyperspectral data present better opportunities to exploit the treasure of spectral and spatial content that lies within their spectral bands. Hyperspectral data are increasingly being considered for exploring levels of urbanization, due to their capability to capture the spectral variability that a modern urban landscape offers. Data and algorithms are two sides of a coin: while the data capture the variations, the algorithms provide suitable methods to extract relevant information. The literature reports a variety of algorithms for extraction of urban information from any given data, with varying accuracies. This article aims to explore the binary-classifier approach to target detection to extract certain features. Roads and roofs are the most common features present in any urban scene. These experiments were conducted on a subset of AVIRIS-NG hyperspectral data from the Udaipur region of India, with roads and roofs as targets. Four categories of target-detection algorithms are identified from a literature survey and our previous experience—distance measures, angle-based measures, information measures, and machine-learning measures—followed by performance evaluation. The article also presents a brief taxonomy of algorithms; explores methods such as the Mahalanobis angle, which has been reported to be effective for extraction of urban targets; and explores newer machine-learning algorithms to increase accuracy. 
This work is likely to aid in city planning, sustainable development, and various other governmental and nongovernmental efforts related to urbanization.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85499753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Real-Time Photogrammetric System for Acquisition and Monitoring of Three-Dimensional Human Body Kinematics","authors":"Long Chen, Bo Wu, Yao Zhao, Yuan Li","doi":"10.14358/pers.87.5.363","DOIUrl":"https://doi.org/10.14358/pers.87.5.363","url":null,"abstract":"Real-time acquisition and analysis of three-dimensional (3D) human body kinematics are essential in many applications. In this paper, we present a real-time photogrammetric system consisting of a stereo pair of red-green-blue (RGB) cameras. The system incorporates a multi-threaded and graphics processing unit (GPU)-accelerated solution for real-time extraction of 3D human kinematics. A deep learning approach is adopted to automatically extract two-dimensional (2D) human body features, which are then converted to 3D features based on photogrammetric processing, including dense image matching and triangulation. The multi-threading scheme and GPU acceleration enable real-time acquisition and monitoring of 3D human body kinematics. Experimental analysis verified that the system processing rate reached ∼18 frames per second. The effective detection distance reached 15 m, with a geometric accuracy of better than 1% of the distance within a range of 12 m. The real-time measurement accuracy for human body kinematics ranged from 0.8% to 7.5%. 
The results suggest that the proposed system is capable of real-time acquisition and monitoring of 3D human kinematics with favorable performance, showing great potential for various applications.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75263766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cartography. A Compendium of Design Thinking for Mapmakers","authors":"K. Field, Adam Steer","doi":"10.14358/pers.87.5.322","DOIUrl":"https://doi.org/10.14358/pers.87.5.322","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88706997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GIS Tips & Tricks—Understanding Aerial Triangulation","authors":"D. Maune, A. Karlin","doi":"10.14358/pers.87.5.319","DOIUrl":"https://doi.org/10.14358/pers.87.5.319","url":null,"abstract":"This month’s column is a bit of a twist on the “standard” GIS Tips & Tricks: it focuses on a highly technical area of photogrammetry, namely Aerial Triangulation, and gives us a brief history of the technology. Dr. David Maune contributed this column, and he opens up the “black box” for a little trickery that enables low-cost, high-precision imagery. Enjoy. Today, Aerial Triangulation (AT) is performed with “black box” technology that most users don’t understand. My “trick” in teaching AT is to review all the generations of photogrammetry that led to today’s digital photogrammetry and Structure from Motion (SfM).","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80703696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inversion of Solar-Induced Chlorophyll Fluorescence Using Polarization Measurements of Vegetation","authors":"Haiyan Yao, Ziying Li, Yang Han, Haofang Niu, Tianyi Hao, Yuyu Zhou","doi":"10.14358/pers.87.5.331","DOIUrl":"https://doi.org/10.14358/pers.87.5.331","url":null,"abstract":"In vegetation remote sensing, the apparent radiation of the vegetation canopy is often regarded as a combination of three components derived from different parts of the vegetation, each with different production mechanisms and optical properties: volume scattering Lvol, polarized light Lpol, and chlorophyll fluorescence ChlF. Chlorophyll fluorescence plays a very important role in vegetation remote sensing, and polarization information has become an effective way to characterize the physical characteristics of vegetation. This study analyzes the difference between these three types of radiation flux and utilizes polarization radiation to separate them from the apparent radiation of the vegetation canopy. Specifically, solar-induced chlorophyll fluorescence is extracted from vegetation canopy radiation data using standard Fraunhofer-line discrimination. The results show that polarization measurements can quantitatively separate Lvol, Lpol, and ChlF and extract the solar-induced chlorophyll fluorescence. 
This study improves our understanding of the light-scattering properties of vegetation canopies and provides insights for developing models and research algorithms.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76047086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scene Classification of Remotely Sensed Images via Densely Connected Convolutional Neural Networks and an Ensemble Classifier","authors":"Q. Cheng, Yuan Xu, Peng Fu, Jinling Li, Wen Wang, Y. Ren","doi":"10.14358/PERS.87.3.295","DOIUrl":"https://doi.org/10.14358/PERS.87.3.295","url":null,"abstract":"Deep learning techniques, especially convolutional neural networks, have boosted performance in analyzing and understanding remotely sensed images to a great extent. However, existing scene-classification methods generally neglect local and spatial information that is vital to scene classification of remotely sensed images. In this study, a method of scene classification for remotely sensed images based on pretrained densely connected convolutional neural networks combined with an ensemble classifier is proposed to tackle the under-utilization of local and spatial information for image classification. Specifically, we first exploit the pretrained DenseNet and fine-tune it to release its potential in remote-sensing image feature representation. Second, a spatial-pyramid structure and an improved Fisher-vector coding strategy are leveraged to further strengthen representation capability and the robustness of the feature map captured from convolutional layers. Then an ensemble classifier is integrated into the network architecture to compensate for the low attention paid to feature descriptors. 
Extensive experiments are conducted, and the proposed method achieves superior performance on UC Merced, AID, and NWPU-RESISC45 data sets.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47399327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Digital Terrain Modeling Method in Urban Areas by the ICESat-2 (Generating precise terrain surface profiles from photon-counting technology)","authors":"Nahed Osama, Bisheng Yang, Yue Ma, Mohamed Freeshah","doi":"10.14358/PERS.87.4.237","DOIUrl":"https://doi.org/10.14358/PERS.87.4.237","url":null,"abstract":"The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) can provide new measurements of the Earth's elevations through photon-counting technology. Most research has focused on extracting the ground and the canopy photons in vegetated areas. Yet the extraction of the ground photons from urban areas, where vegetation is mixed with artificial constructions, has not been fully investigated. This article proposes a new method to estimate the ground surface elevations in urban areas. The ICESat-2 signal photons were detected by the improved Density-Based Spatial Clustering of Applications with Noise algorithm and the Advanced Topographic Laser Altimeter System algorithm. The Advanced Land Observing Satellite-1 PALSAR-derived digital surface model has been utilized to separate the terrain surface from the ICESat-2 data. 
A set of ground-truth data was used to evaluate the accuracy of these two methods, and the achieved accuracy was up to 2.7 cm, which makes our method effective and accurate in determining the ground elevation in urban scenes.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42851596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discovering Potential Illegal Construction Within Building Roofs from UAV Images Using Semantic Segmentation and Object-Based Change Detection","authors":"Yang Liu, Yujie Sun, Shikang Tao, Min Wang, Qian Shen, Jiru Huang","doi":"10.14358/PERS.87.4.263","DOIUrl":"https://doi.org/10.14358/PERS.87.4.263","url":null,"abstract":"A novel potential illegal construction (PIC) detection method by bitemporal unmanned aerial vehicle (UAV) image comparison (change detection) within building roof areas is proposed. In this method, roofs are first extracted from UAV images using a depth-channel improved UNet model. A two-step change detection scheme is then implemented for PIC detection. In the change detection stage, roofs with appearance, disappearance, and shape changes are first extracted by morphological analysis. Subroof primitives are then obtained by roof-constrained image segmentation within the remaining roof areas, and object-based iteratively reweighted multivariate alteration detection (IR-MAD) is implemented to extract the small PICs from the subroof primitives. The proposed method organically combines deep learning and object-based image analysis, which can identify entire roof changes and locate small object changes within the roofs. 
Experiments show that the proposed method achieves better accuracy than counterpart methods, including the original IR-MAD, change vector analysis, and principal components analysis-K-means.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42408808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parsing of Urban Facades from 3D Point Clouds Based on a Novel Multi-View Domain","authors":"Wei Wang, Yuanzi Xu, Y. Ren, Gang Wang","doi":"10.14358/PERS.87.4.283","DOIUrl":"https://doi.org/10.14358/PERS.87.4.283","url":null,"abstract":"Recently, performance improvements in facade parsing from 3D point clouds have been achieved by designing more complex network structures, which cost huge computing resources and do not take full advantage of prior knowledge of facade structure. Instead, from the perspective of data distribution, we construct a new hierarchical mesh multi-view data domain based on the characteristics of facade objects to achieve fusion of deep-learning models and prior knowledge, thereby significantly improving segmentation accuracy. We comprehensively evaluate current mainstream methods on the RueMonge 2014 data set and demonstrate the superiority of our method. The mean intersection-over-union index on the facade-parsing task reached 76.41%, which is 2.75% higher than the previous best result. In addition, through comparative experiments, the reasons for the performance improvement of the proposed method are further analyzed.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48822100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}