{"title":"Cloud Detection in ZY-3 Multi-Angle Remote Sensing Images","authors":"Haiyan Huang, Q. Cheng, Yin Pan, N. Lyimo, Hao Peng, Gui Cheng","doi":"10.14358/pers.21-00086r2","DOIUrl":"https://doi.org/10.14358/pers.21-00086r2","url":null,"abstract":"Cloud pollution on remote sensing images seriously affects the actual use rate of remote sensing images. Therefore, cloud detection of remote sensing images is an indispensable part of image preprocessing and image availability screening. Aiming at the lack of short wave infrared and\u0000 thermal infrared bands in ZY-3 high-resolution satellite images resulting in the poor detection effect, considering the obvious difference in geographic height between cloud and ground surface objects, this paper proposes a thick and thin cloud detection method combining spectral information\u0000 and digital height model (DHM) based on multi-scale features-convolutional neural network (MF-CNN) model. To verify the importance of DHM height information in cloud detection of ZY-3 multi-angle remote sensing images, this paper implements cloud detection comparison of the data set with and\u0000 without DHM height information based on the MF-CNN model. The experimental results show that the ZY-3 multi-angle image with DHM height information can effectively improve the confusion between highlighted surface and thin cloud, which also means the assistance of DHM height information can\u0000 make up for the disadvantage of high-resolution image lacking short wave infrared and thermal infrared bands.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81773301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Sample-Based Real-Time Spatiotemporal Spectral Unmixing","authors":"Xinyu Ding, Qunming Wang","doi":"10.14358/pers.21-00039r2","DOIUrl":"https://doi.org/10.14358/pers.21-00039r2","url":null,"abstract":"Recently, the method of spatiotemporal spectral unmixing (STSU ) was developed to fully explore multi-scale temporal information (e.g., MODIS –Landsat image pairs) for spectral unmixing of coarse time series (e.g., MODIS data). To further enhance the application for timely monitoring,\u0000 the real-time STSU( RSTSU) method was developed for real-time data. In RSTSU, we usually choose a spatially complete MODIS–Landsat image pair as auxiliary data. Due to cloud contamination, the temporal distance between the required effective auxiliary data and the real-time data to be\u0000 unmixed can be large, causing great land cover changes and uncertainty in the extracted unchanged pixels (i.e., training samples). In this article, to extract more reliable training samples, we propose choosing the auxiliary MODIS–Landsat data temporally closest to the prediction time.\u0000 To deal with the cloud contamination in the auxiliary data, we propose an augmented sample-based RSTSU( ARSTSU) method. ARSTSU selects and augments the training samples extracted from the valid (i.e., non-cloud) area to synthesize more training samples, and then trains an effective learning\u0000 model to predict the proportions. ARSTSU was validated using two MODIS data sets in the experiments. ARSTSU expands the applicability of RSTSU by solving the problem of cloud contamination in temporal neighbors in actual situations.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86334604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object and Pattern Recognition in Remote Sensing","authors":"S. Hinz, A. Braun, M. Weinmann","doi":"10.14358/pers.88.1.9","DOIUrl":"https://doi.org/10.14358/pers.88.1.9","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82568745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining the Integration of Landsat Operational Land Imager with Sentinel-1 and Vegetation Indices in Mapping Southern Yellow Pines (Loblolly, Shortleaf, and Virginia Pines)","authors":"C. Akumu, Ezechirinum Amadi","doi":"10.14358/pers.21-00024r2","DOIUrl":"https://doi.org/10.14358/pers.21-00024r2","url":null,"abstract":"The mapping of southern yellow pines (loblolly, shortleaf, and Virginia pines) is important to supporting forest inventory and the management of forest resources. The overall aim of this study was to examine the integration of Landsat Operational Land Imager (OLI ) optical data with\u0000 Sentinel-1 microwave C-band satellite data and vegetation indices in mapping the canopy cover of southern yellow pines. Specifically, this study assessed the overall mapping accuracies of the canopy cover classification of southern yellow pines derived using four data-integration scenarios:\u0000 Landsat OLI alone; Landsat OLI and Sentinel-1; Landsat OLI with vegetation indices derived from satellite data—normalized difference vegetation index, soil-adjusted vegetation index, modified soil-adjusted vegetation index, transformed soil-adjusted vegetation index, and infrared\u0000 percentage vegetation index; and 4) Landsat OLI with Sentinel-1 and vegetation indices. The results showed that the integration of Landsat OLI reflectance bands with Sentinel-1 backscattering coefficients and vegetation indices yielded the best overall classification accuracy,\u0000 about 77%, and standalone Landsat OLI the weakest accuracy, approximately 67%. The findings in this study demonstrate that the addition of backscattering coefficients from Sentinel-1 and vegetation indices positively contributed to the mapping of southern yellow pines.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75443470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grids and Datums Update: This month we look at the Republic of Vanuatu","authors":"C. Mugnier","doi":"10.14358/pers.88.1.11","DOIUrl":"https://doi.org/10.14358/pers.88.1.11","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88213326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Urban Land Cover Mapping with the Fusion of Optical and SAR Data Based on Feature Selection Strategy","authors":"Qing Ding, Z. Shao, Xiao Huang, O. Altan, Yewen Fan","doi":"10.14358/pers.21-00030r2","DOIUrl":"https://doi.org/10.14358/pers.21-00030r2","url":null,"abstract":"Taking the Futian District as the research area, this study proposed an effective urban land cover mapping framework fusing optical and SAR data. To simplify the model complexity and improve the mapping results, various feature selection methods were compared and evaluated. The results\u0000 showed that feature selection can eliminate irrelevant features, increase the mean correlation between features slightly, and improve the classification accuracy and computational efficiency significantly. The recursive feature elimination-support vector machine (RFE-SVM) model obtained the\u0000 best results, with an overall accuracy of 89.17% and a kappa coefficient of 0.8695, respectively. In addition, this study proved that the fusion of optical and SAR data can effectively improve mapping and reduce the confusion between different land covers. The novelty of this study is with\u0000 the insight into the merits of multi-source data fusion and feature selection in the land cover mapping process over complex urban environments, and to evaluate the performance differences between different feature selection methods.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81960703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Locust Invasion and Mitigation Using Remote Sensing Techniques: A Case Study of North Sindh Pakistan","authors":"Muhammad Nasar Ahmad, Z. Shao, O. Altan","doi":"10.14358/pers.21-00025r2","DOIUrl":"https://doi.org/10.14358/pers.21-00025r2","url":null,"abstract":"This study comprises the identification of the locust outbreak that happened in February 2020. It is not possible to conduct ground-based surveys to monitor such huge disasters in a timely and adequate manner. Therefore, we used a combination of automatic and manual remote sensing data\u0000 processing techniques to find out the aftereffects of locust attack effectively. We processed MODIS -normalized difference vegetation index (NDVI ) manually on ENVI and Landsat 8 NDVI using the Google Earth Engine (GEE ) cloud computing platform. We found from the results that, (a) NDVI computation\u0000 on GEE is more effective, prompt, and reliable compared with the results of manual NDVI computations; (b) there is a high effect of locust disasters in the northern part of Sindh, Thul, Ghari Khairo, Garhi Yaseen, Jacobabad, and Ubauro, which are more vulnerable; and (c) NDVI value suddenly\u0000 decreased to 0.68 from 0.92 in 2020 using Landsat NDVI and from 0.81 to 0.65 using MODIS satellite imagery. Results clearly indicate an abrupt decrease in vegetation in 2020 due to a locust disaster. That is a big threat to crop yield and food production because it provides a major portion\u0000 of food chain and gross domestic product for Sindh, Pakistan.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80468727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensing and Human Factors Research: A Review","authors":"Raechel A. Portelli, P. Pope","doi":"10.14358/pers.21-00012r2","DOIUrl":"https://doi.org/10.14358/pers.21-00012r2","url":null,"abstract":"Human experts are integral to the success of computational earth observation. They perform various visual decision-making tasks, from selecting data and training machine-learning algorithms to interpreting accuracy and credibility. Research concerning the various human factors which\u0000 affect performance has a long history within the fields of earth observation and the military. Shifts in the analytical environment from analog to digital workspaces necessitate continued research, focusing on human-in-the-loop processing. This article reviews the history of human-factors\u0000 research within the field of remote sensing and suggests a framework for refocusing the discipline's efforts to understand the role that humans play in earth observation.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75198542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-View Urban Scene Classification with a Complementary-Information Learning Model","authors":"Wanxuan Geng, Weixun Zhou, Shuanggen Jin","doi":"10.14358/pers.21-00062r2","DOIUrl":"https://doi.org/10.14358/pers.21-00062r2","url":null,"abstract":"Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views\u0000 is needed to further improve performance. Therefore, we present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to\u0000 learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss consisting of cross entropy and contrastive losses is exploited to force the network to be more robust. Once CILM is trained, the features of each view are\u0000 extracted via the two proposed feature-extraction scenarios and then fused to train the support vector machine classifier for classification. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that\u0000 it is an effective model for learning complementary information and thus improving urban scene classification.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77610927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}