{"title":"Identifying and Evaluating the Nighttime Economy in China Using Multisource Data","authors":"Yuanzheng Cui, Kaifang Shi, Lei Jiang, Lefeng Qiu, Shaohua Wu","doi":"10.1109/lgrs.2020.3010936","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3010936","url":null,"abstract":"The nighttime economy has long been regarded as an important part of the economy. Monitoring and evaluating the nighttime economic level is of great significance for promoting consumption and economic growth and for optimizing industrial structure. However, evaluating the nighttime economy in China has been difficult because the required data are unavailable. Hence, the objective of this study is to identify and evaluate the nighttime economy in China from different perspectives. First, a comprehensive nighttime economic index (CNEI) was constructed by integrating nighttime light intensity with points-of-interest data to represent the nighttime economic level. The CNEI was then verified against business report data and socioeconomic statistical data, and the results show that it is highly correlated with both. We also found that Shanghai, Chengdu, Guangzhou, and Shenzhen have the highest CNEI values and that the CNEI values of southern cities are generally higher than those of northern cities, mainly because differences in lifestyle, climate, and cultural customs between the north and the south shape nighttime economic activity. Counties with very high CNEI values are mostly located in provincial capital cities. Spatial agglomeration was stronger at the county level than at the prefecture level. 
This study will not only help in better understanding the nighttime economic level at different scales but also inform city-level policymaking on urban planning and economic development.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1906-1910"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3010936","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47826499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
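The CNEI above integrates nighttime light intensity with points-of-interest data. A minimal sketch of such a composite index is given below; the min-max normalization and the equal weighting are assumptions for illustration, since the letter's exact formula is not reproduced in this abstract.

```python
import numpy as np

def minmax(x):
    # Min-max normalize to [0, 1]; assumes a non-constant input vector.
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def cnei(ntl, poi_density, w=0.5):
    """Hypothetical composite nighttime-economy index: a weighted sum of
    normalized nighttime-light intensity and normalized POI density.
    The weight w=0.5 is an assumption, not the letter's calibration."""
    return w * minmax(ntl) + (1 - w) * minmax(poi_density)

ntl = np.array([5.0, 20.0, 60.0, 45.0])        # nighttime light radiance per city
poi = np.array([120.0, 900.0, 2400.0, 1500.0]) # night-business POI counts per city
index = cnei(ntl, poi)                         # city 2 scores highest on both inputs
```

A verification step in the spirit of the letter would then correlate `index` against independent socioeconomic statistics.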
{"title":"Extension of Image Data Using Generative Adversarial Networks and Application to Identification of Aurora","authors":"Aoi Uchino, M. Matsumoto","doi":"10.1109/lgrs.2020.3012620","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3012620","url":null,"abstract":"In recent years, automatic auroral image classification has been actively investigated. The baseline method relies on supervised learning. As this approach requires a large amount of labeled teacher data, the data must be collected and labeled manually, which is time-consuming. In this study, we proposed a method to extend an image data set by training a deep convolutional generative adversarial network (DCGAN) on the available images and generating additional images with it. The proposed approach uses both the generated and the original images to train the classifier, which reduces the amount of manual labeling required. In an evaluation experiment, we trained classifiers on the data sets before and after extension and confirmed that training on the extended data set improved the classification accuracy.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1941-1945"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3012620","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45296006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
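The data-set extension pipeline described above (fit a generative model to the labeled samples, generate synthetic samples, train on the union) can be sketched with a deliberately simple stand-in generator; a per-feature Gaussian replaces the DCGAN here, since the letter's generator requires a deep-learning framework.

```python
import numpy as np

def fit_gaussian_generator(x):
    # Stand-in for the DCGAN generator: fit a per-feature Gaussian
    # to the real training samples. NOT the letter's model.
    return x.mean(axis=0), x.std(axis=0) + 1e-9

def generate(params, n, rng):
    # Sample n synthetic feature vectors from the fitted distribution.
    mu, sd = params
    return rng.normal(mu, sd, size=(n, mu.size))

rng = np.random.default_rng(5)
real = rng.normal(loc=3.0, scale=0.5, size=(20, 16))   # small labeled set
gen = generate(fit_gaussian_generator(real), 80, rng)  # synthetic samples
extended = np.vstack([real, gen])                      # extended training set
```

The classifier would then be trained on `extended`, with the generated rows inheriting the label of the class whose images trained the generator.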
{"title":"A Filtering Method for ICESat-2 Photon Point Cloud Data Based on Relative Neighboring Relationship and Local Weighted Distance Statistics","authors":"Yi Li, Haiqiang Fu, Jianjun Zhu, Changcheng Wang","doi":"10.1109/lgrs.2020.3011215","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3011215","url":null,"abstract":"The existing local distance statistics-based filtering method for photon point cloud data is greatly affected by the input parameter (number of photon neighbors) and has a poor ability to remove noise photons that are adjacent to signal photons. In this letter, the relative neighboring relationship (RNR) is proposed to describe the relative density distribution of the neighboring photon points around two photon points. The mean local weighted distance is then defined, which is used to enhance the discrimination between the noise photons adjacent to the signal photons and the signal photons. Finally, according to the statistical characteristics of the mean local weighted distance, two strategies for threshold selection are used to separate signal photons from noise photons. 
ICESat-2 data acquired over a tropical forest were used to verify the performance of the proposed method, and the results showed that: 1) the proposed method has a better ability to remove the noise photons adjacent to signal photons and 2) its performance is not strongly dependent on the input parameter.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1891-1895"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3011215","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46613179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
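A rough sketch of the statistic described above: a mean weighted distance over each photon's k nearest neighbors, with denser neighborhoods scoring lower. The inverse-distance weighting and the mean-plus-sigma threshold are assumptions; the letter defines its own weighting and two threshold-selection strategies.

```python
import numpy as np

def mean_weighted_knn_distance(points, k=5):
    # Pairwise distances between photons (N x 2: along-track, elevation).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]      # k nearest-neighbor distances
    w = 1.0 / (knn + 1e-9)               # closer neighbors weigh more (assumed weighting)
    return (w * knn).sum(axis=1) / w.sum(axis=1)

def filter_photons(points, k=5, n_sigma=1.0):
    """Keep photons whose mean weighted neighbor distance is small.
    The mean + n_sigma * std rule is an illustrative threshold."""
    m = mean_weighted_knn_distance(points, k)
    return m < m.mean() + n_sigma * m.std()

rng = np.random.default_rng(0)
# Dense "signal" band (a canopy-like profile) plus uniform background noise.
signal = np.column_stack([np.linspace(0, 100, 200), rng.normal(50, 0.5, 200)])
noise = np.column_stack([rng.uniform(0, 100, 40), rng.uniform(0, 100, 40)])
photons = np.vstack([signal, noise])
mask = filter_photons(photons, k=5)      # True = kept as signal
```

Signal photons sit in a dense band and score low; isolated background photons score high and are rejected.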
{"title":"An Improved Map-Drift Algorithm for Unmanned Aerial Vehicle SAR Imaging","authors":"Y. Huang, Fei Liu, Zhanye Chen, Jie Li, Wei Hong","doi":"10.1109/lgrs.2020.3011973","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3011973","url":null,"abstract":"Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is usually sensitive to trajectory deviations, which cause severe motion error in the recorded data. Because of its small size, a UAV can hardly carry a high-accuracy inertial navigation system. Therefore, to obtain precise SAR imagery, autofocus algorithms such as the phase gradient autofocus (PGA) method and the map-drift (MD) algorithm have been proposed to compensate the motion error based on the received signal, but most of them assume a range-invariant motion error and abundant prominent scatterers. In this letter, an improved MD algorithm is proposed that, unlike existing MD algorithms, compensates the range-variant motion error. To handle the outliers caused by homogeneous scenes or the absence of prominent scatterers, a random sample consensus (RANSAC) algorithm is employed to mitigate their influence, yielding robust performance across different cases. Finally, real SAR data are used to demonstrate the effectiveness of the proposed method.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1966-1970"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3011973","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43283988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
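The RANSAC step can be illustrated on synthetic drift estimates: a trend is fit across range while estimates corrupted by homogeneous scenes are excluded from the consensus set. This is a generic RANSAC line fit, not the letter's implementation.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.5, seed=0):
    """RANSAC straight-line fit: repeatedly fit a line to two random
    points, score by how many points fall within tol, then refit on
    the best consensus set with least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        inliers = np.abs(y - (slope * x + intercept)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    coef = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return coef, best_inliers

x = np.arange(10.0)          # range-bin index (hypothetical)
y = 2.0 * x + 1.0            # true range-variant drift trend
y[[3, 7]] += 20.0            # outlier drifts from low-contrast range bins
coef, inliers = ransac_line(x, y)
```

The refit uses only the consensus set, so the two corrupted estimates do not bias the recovered trend.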
{"title":"Remote Sensing Image Scene Classification Based on an Enhanced Attention Module","authors":"Zhicheng Zhao, Jiaqi Li, Ze Luo, Jian Li, Can Chen","doi":"10.1109/lgrs.2020.3011405","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3011405","url":null,"abstract":"Classifying different satellite remote sensing scenes is a very important subtask in the field of remote sensing image interpretation. With the recent development of convolutional neural networks (CNNs), remote sensing scene classification methods have continued to improve. However, the use of recognition methods based on CNNs is challenging because the background of remote sensing image scenes is complex and many small objects often appear in these scenes. In this letter, to improve the feature extraction and generalization abilities of deep neural networks so that they can learn more discriminative features, an enhanced attention module (EAM) was designed. Our proposed method achieved very competitive performance—94.29% accuracy on NWPU-RESISC45 and state-of-the-art performance on different remote sensing scene recognition data sets. The experimental results show that the proposed method can learn more discriminative features than state-of-the-art methods, and it can effectively improve the accuracy of scene classification for remote sensing images. 
Our code is available at https://github.com/williamzhao95/Pay-More-Attention.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1926-1930"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3011405","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45491244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
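The abstract does not detail the EAM's structure, but the generic channel-attention pattern such modules build on (global pooling for context, a bottleneck MLP, sigmoid gating of channels) can be sketched as follows; the weights here are random stand-ins.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap, w1, w2):
    """Squeeze-and-excitation-style channel attention: global average
    pool, bottleneck MLP, sigmoid gates. The EAM in the letter is more
    elaborate; this shows only the generic attention pattern."""
    squeeze = fmap.mean(axis=(1, 2))                       # (C,) global context
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates in (0, 1)
    return fmap * excite[:, None, None], excite            # reweighted channels

rng = np.random.default_rng(1)
fmap = rng.normal(size=(8, 4, 4))      # C x H x W feature map
w1 = rng.normal(size=(2, 8)) * 0.1     # bottleneck: 8 -> 2 channels
w2 = rng.normal(size=(8, 2)) * 0.1     # expand: 2 -> 8 channels
out, gates = channel_attention(fmap, w1, w2)
```

Channels with gates near 1 pass through almost unchanged; gates near 0 suppress channels, letting the network emphasize discriminative features.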
{"title":"Improved Drone Classification Using Polarimetric Merged-Doppler Images","authors":"B. Kim, Hyunseong Kang, Seongwook Lee, Seong‐Ook Park","doi":"10.1109/lgrs.2020.3011114","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3011114","url":null,"abstract":"We propose a drone classification method for polarimetric radar based on a convolutional neural network (CNN) and image processing methods. The proposed method improves drone classification accuracy when the micro-Doppler signature is severely weakened by the aspect angle. To utilize the received polarimetric signals, we propose a novel image structure for a three-channel image classification CNN. An image processing method and structure are introduced to reduce the size of the data from the four polarizations while securing high classification accuracy. The data set was prepared for three types of drones with a polarimetric Ku-band frequency-modulated continuous-wave (FMCW) radar system. The proposed method was tested and verified in an anechoic chamber environment for fast evaluation. A well-known CNN architecture, GoogLeNet, was used to evaluate the effect of the proposed radar preprocessing. The results show that the proposed method improved the accuracy from 89.9% to 99.8% compared with the single-polarized micro-Doppler image. 
We compared the result of the proposed method with the conventional polarimetric radar image structure and achieved similar accuracy while using only half of the full polarimetric data.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1946-1950"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3011114","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45296531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
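One way to pack four polarization channels into the three-channel input a standard image CNN such as GoogLeNet expects is sketched below. The channel assignment (the two co-pols plus the averaged cross-pols) is an assumed illustration, not necessarily the letter's image structure.

```python
import numpy as np

def merge_polarimetric(hh, hv, vh, vv):
    """Pack four single-polarization micro-Doppler images into one
    three-channel image: HH and VV keep their own channels, and the
    reciprocal cross-pols HV/VH are averaged into the third (assumed
    assignment). Each channel is min-max normalized independently."""
    def norm(img):
        img = np.asarray(img, dtype=float)
        return (img - img.min()) / (img.max() - img.min() + 1e-12)
    return np.stack([norm(hh), norm(vv), norm(0.5 * (hv + vh))], axis=-1)

rng = np.random.default_rng(2)
hh, hv, vh, vv = (rng.random((64, 64)) for _ in range(4))  # micro-Doppler images
merged = merge_polarimetric(hh, hv, vh, vv)                # H x W x 3 CNN input
```

Averaging the reciprocal cross-pol pair is what halves the data volume relative to keeping all four polarizations.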
{"title":"Table of contents","authors":"","doi":"10.1109/lgrs.2021.3118255","DOIUrl":"https://doi.org/10.1109/lgrs.2021.3118255","url":null,"abstract":"","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"1 1","pages":""},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41792890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Martian Topographic Roughness Spectra and Its Influence on Bistatic Radar Scattering","authors":"Yu Liu, Ying Yang, Kun Shan Chen","doi":"10.1109/lgrs.2020.3012427","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3012427","url":null,"abstract":"There are few studies on predicting fully bistatic scattering from the rough surface of Mars, though some bistatic radar observations have been made, such as during the Mars Express mission. To better understand the interaction of radar signals with a planetary surface in bistatic radar observations, the topographic-scale roughness of Mars, characterized by a two-dimensional power spectral density (2D-PSD), is examined in view of its global roughness variations and scale dependence on geological units. The analysis shows that the Martian 2D-PSD is strongly dependent on the geological units and that it lies between Gaussian and exponential functions, with a power index equal to 1.9. The bistatic scattering coefficients are calculated by an advanced integral equation model (AIEM) with the 2D-PSD as the input. The results show that the specific surface roughness spectrum and the dielectric inhomogeneity should be taken into account in interpreting the bistatic radar scattering response.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1951-1955"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/lgrs.2020.3012427","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45871120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
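The 2D-PSD underlying such an analysis can be estimated from a gridded DEM with a standard FFT-based estimate. The normalization below is one common discrete convention (an assumption; conventions vary), chosen so that the PSD sums to the detrended surface's total power.

```python
import numpy as np

def psd_2d(dem, dx=1.0):
    """Two-dimensional power spectral density of gridded topography.
    Detrends by removing the mean; returns the shifted PSD plus the
    corresponding spatial-frequency axes."""
    n, m = dem.shape
    z = dem - dem.mean()                         # remove the DC component
    spec = np.fft.fftshift(np.fft.fft2(z))
    psd = (np.abs(spec) ** 2) * (dx * dx) / (n * m)
    kx = np.fft.fftshift(np.fft.fftfreq(m, d=dx))   # cycles per unit length
    ky = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    return psd, kx, ky

rng = np.random.default_rng(3)
dem = rng.normal(size=(64, 64))   # stand-in elevation grid
psd, kx, ky = psd_2d(dem)
```

A power-law index such as the 1.9 reported above would then come from fitting log-PSD against log-wavenumber over a radially averaged version of `psd`.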
{"title":"SAR RFI Suppression for Extended Scene Using Interferometric Data via Joint Low-Rank and Sparse Optimization","authors":"Huizhang Yang, Chengzhi Chen, Shengyao Chen, Feng Xi, Zhong Liu","doi":"10.1109/lgrs.2020.3011547","DOIUrl":"https://doi.org/10.1109/lgrs.2020.3011547","url":null,"abstract":"Radio frequency interference (RFI) can significantly pollute synthetic aperture radar (SAR) data and images and is also harmful to SAR interferometry (InSAR) for retrieving elevation information. To address this issue, a class of advanced RFI suppression methods has been proposed in recent years based on narrowband properties of RFI and sparsity assumptions on radar echoes or target reflectivity. However, for SAR echoes and the associated scene reflectivity, these assumptions usually do not hold when the imaged scene is spatially extended. In view of these problems, this study proposes an InSAR-based RFI suppression method for the case of extended scenes. For this task, we combine the RFI-polluted SAR data with RFI-free interferometric data to form an interferometric SAR data pair. We show that such an InSAR data pair embeds an interferogram in which the image amplitude is multiplied by a complex exponential of the interferometric phase. We treat the interferogram as a kind of natural image and use the discrete cosine transform (DCT) for its sparse representation. Combining the DCT-domain sparsity with a low-rank model of the RFI, we retrieve the interferogram and reconstruct the SAR image via joint low-rank and sparse optimization. 
Numerical simulations show that the proposed method can effectively recover SAR images and interferometric phases from RFI-polluted SAR data.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1976-1980"},"PeriodicalIF":4.8,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41751863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
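The low-rank-plus-sparse idea can be conveyed with a crude alternating-thresholding sketch: singular-value thresholding captures the narrowband (hence low-rank) RFI, and elementwise soft thresholding captures the residual treated as sparse. The letter poses a joint optimization over an interferometric pair with DCT-domain sparsity; this stand-in works directly on a raw data matrix.

```python
import numpy as np

def soft(x, tau):
    # Elementwise soft-thresholding (proximal operator of the l1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lowrank_sparse_split(d, tau_l=1.0, tau_s=0.1, n_iter=50):
    """Alternate singular-value thresholding (low-rank part) and
    elementwise soft thresholding (sparse residual). Thresholds and
    iteration count are illustrative assumptions."""
    s = np.zeros_like(d)
    for _ in range(n_iter):
        u, sig, vt = np.linalg.svd(d - s, full_matrices=False)
        lowrank = (u * soft(sig, tau_l)) @ vt   # singular-value thresholding
        s = soft(d - lowrank, tau_s)            # sparse residual update
    return lowrank, s

rng = np.random.default_rng(4)
rfi = np.outer(rng.normal(size=32), rng.normal(size=32))  # rank-1 "RFI"
echo = 0.05 * rng.normal(size=(32, 32))                   # weak echo term
lowrank, residual = lowrank_sparse_split(rfi + echo)
```

Subtracting the recovered low-rank term from the polluted data is what removes the interference while preserving the underlying signal content.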