{"title":"Discriminative Semi-Supervised Generative Adversarial Network for Hyperspectral Anomaly Detection","authors":"T. Jiang, Weiying Xie, Yunsong Li, Q. Du","doi":"10.1109/IGARSS39084.2020.9323688","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9323688","url":null,"abstract":"Hyperspectral anomaly detection has been facing great challenges in the field of deep learning due to high dimensions and limited samples. To address these challenges, a novel discriminative semi-supervised generative adversarial network (GAN) method with dual RX (Reed-Xiaoli) detection, called semiDRX, is proposed in this paper. The main contribution of the proposed method is to learn a reconstruction with background homogenization and anomaly saliency through a semi-supervised GAN. To achieve this goal, firstly, a coarse RX detection is performed to obtain a background sample set with potential anomalous pixels removed. Secondly, the network learns more comprehensive background characteristics from the obtained coarse background set. The original hyperspectral image (HSI) is fed into the learned network to obtain reconstructions with homogeneous backgrounds and salient anomalies. The refined detection results are generated by a second RX detector. Experiments on three HSIs over different scenes demonstrate the advancement and effectiveness of the proposed method.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"641 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115114963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Siamese Generative Adversarial Network for Change Detection Under Different Scales","authors":"Mengxi Liu, Q. Shi, Penghua Liu, Cheng Wan","doi":"10.1109/IGARSS39084.2020.9323499","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9323499","url":null,"abstract":"Change detection methods based on low-resolution (LR) images with higher temporal resolution often lead to fuzzy results, while high-resolution images (HRIs) can provide more detailed information to solve this problem. However, it is hard to obtain two tiles of high-quality HRIs for rapid change detection in actual production, due to low temporal resolution and high cost. Therefore, it is necessary to explore a change detection method combining low- and high-resolution images to acquire urban change areas more accurately and quickly. In this paper, an end-to-end siamese generative adversarial network (SiamGAN), integrating a super-resolution network and a siamese structure, is proposed for change detection under different scales. The super-resolution network is used to reconstruct low-resolution images into high-resolution images, while the siamese structure is adopted as the classification network to detect changes. In the experiments, SiamGAN achieved an F1 of 76.06% and an IoU of 61.52% on the test set, which is respectively 5.68% and 6.92% higher than the CNN-based methods using LR images after bicubic interpolation. The results show that our proposed method can effectively overcome the difference in scale between low- and high-resolution images and perform change detection more precisely and rapidly.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115654303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Domain Adaptation for Semantic Segmentation of Aerial Imagery Using Cycle-Consistent Adversarial Networks","authors":"F. Schenkel, W. Middelmann","doi":"10.1109/IGARSS39084.2020.9323650","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9323650","url":null,"abstract":"Semantic segmentation is an important computer vision task for the analysis of aerial imagery in many remote sensing applications. Due to the large availability of data, it is possible to design efficient convolutional neural network-based deep learning models for this purpose. However, these methods usually show weak performance when applied without modification to data from another domain with different characteristics, such as sensor properties or environmental influences. To improve the performance of these methods, domain adaptation approaches can be employed. In this work, we present a method for unsupervised domain adaptation for semantic segmentation. We trained an encoder-decoder model on the source domain dataset for the segmentation task and adjusted the network to the target domain. The adaptation process is based on a style transfer component, which is realized using a cycle-consistent adversarial network. Through continuous adaptation of the task model, we achieved higher generalization of the network and increased the task performance on the target domain.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115733641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Global-Aware Deep Network for Road Detection of Very High Resolution Remote Sensing Imagery","authors":"Xiaoyan Lu, Yanfei Zhong, Zhuo Zheng","doi":"10.1109/IGARSS39084.2020.9323155","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9323155","url":null,"abstract":"Road detection from very high-resolution (VHR) remote sensing imagery has great importance in a broad array of applications. However, even the most advanced deep learning-based methods often produce fragmented road segments, due to the complex backgrounds of images, such as the occlusions and shadows caused by trees and buildings, or surrounding objects with similar textures. In this paper, the characteristics of existing models were analyzed and an effective road recognition method was explored; we found that capturing long-range dependencies helps improve road recognition. Therefore, a novel global-aware deep network (GAN) for road detection is proposed, in which a spatial-aware module (SAM) is applied to capture spatial context dependencies and a channel-aware module (CAM) is applied to capture inter-channel dependencies. By establishing the relationships between spatial contexts and between channels, the GAN can effectively alleviate the road recognition problems, and the advantages of the proposed approach were validated on the public DeepGlobe road dataset. The experimental results demonstrate the superiority of our method.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117139392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical and Polarimetric SAR Data Fusion Terrain Classification Using Probabilistic Feature Fusion","authors":"R. West, D. Yocky, Brian J. Redman, J. D. Laan, Dylan Z. Anderson","doi":"10.1109/IGARSS39084.2020.9324022","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9324022","url":null,"abstract":"Deciding on an imaging modality for terrain classification can be a challenging problem. For some terrain classes a given sensing modality may discriminate well, but may not have the same performance on other classes that a different sensor may be able to easily separate. The most effective terrain classification will utilize the abilities of multiple sensing modalities. The challenge of utilizing multiple sensing modalities is then determining how to combine the information in a meaningful and useful way. In this paper, we introduce a framework for effectively combining data from optical and polarimetric synthetic aperture radar sensing modalities. We demonstrate the fusion framework for two vegetation classes and two ground classes and show that fusing data from both imaging modalities has the potential to improve terrain classification from either modality, alone.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117183522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monitoring the Changes of the Arctic Environment with the Joint Polar Satellite System (JPSS) Sounding Data Products","authors":"Lihang Zhou","doi":"10.1109/IGARSS39084.2020.9323874","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9323874","url":null,"abstract":"The latest Arctic Report Card indicated that the Arctic ecosystems and communities are increasingly at risk due to continued warming and declining sea ice [1]. With their high spatial and spectral resolution, and high temporal resolution over the Arctic regions, the data products derived from the Joint Polar Satellite System (JPSS) provide very useful information for monitoring the rapid changes of the Arctic environment. Applications of the JPSS imaging and sounding data products for Arctic monitoring, such as the status and changes of temperature, moisture, trace gases, snow and ice, as well as the outgoing longwave radiation budget, are introduced in this paper.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117232069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Small Object Change Detection Based on Multitask Siamese Network","authors":"Shreya Sharma, Eiji Kaneko, M. Toda","doi":"10.1109/IGARSS39084.2020.9324150","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9324150","url":null,"abstract":"This paper presents a change detection method for small objects, represented by approximately ten pixels in an image, based on a multitask Siamese network for multitemporal SAR images. In our proposed method, not only the change detection task but also an object classification task is introduced to the network. The classification task is expected to enhance the performance of change detection by providing semantic information about changes and to focus the network's attention on the target small object class. We tested the proposed method in a real-world application of car parking lot monitoring with 1-meter resolution TerraSAR-X images. Experimental results show that the f-measure of the change class is improved by more than 7% over conventional methods based on post-classification, PCA+K-means, and a Siamese network. Furthermore, car-to-car type changes are detected by the proposed method with 25% higher accuracy than the method without the classification task.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117347680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of Cloud Cover in Sentinel-2 Data Using Random Forest Classifier","authors":"Petteri Nevavuori, T. Lipping, Nathaniel G. Narra, Petri Linna","doi":"10.1109/IGARSS39084.2020.9323683","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9323683","url":null,"abstract":"In this paper, a novel cloud coverage assessment method for Sentinel-2 data is presented. The method is based on the Random Forest classifier, and the target values used in the training process are obtained by comparing the NDVI indices calculated from the satellite and the UAV data. The developed method is shown to outperform the Sentinel Cloud Probability Mask (CLDPRB) and Scene Classification (SCL) data layers in detecting cloudy areas.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"61 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120939644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparse Representation-Based Image Fusion for Multi-Source NDVI Change Detection","authors":"Mengliang Zhang, Yuerong Chen, Song Li, Xin Tian","doi":"10.1109/IGARSS39084.2020.9324353","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9324353","url":null,"abstract":"The normalized differential vegetation index (NDVI) is a useful index for change detection in remote sensing vegetation analysis. Multi-source NDVI change detection, which utilizes the NDVI information at different times from multiple satellites, can solve the problem of long revisiting periods for a single source (satellite). However, the spatial resolution of the NDVI calculated from the multispectral images of different satellites differs. A sparse representation-based image fusion method is proposed to improve the spatial resolution of NDVI. First, a high spatial-resolution vegetation index (HRVI) is utilized. The proposed method is based on the assumption that NDVI and HRVI with different resolutions will have the same sparse coefficients under some specific dictionaries. In the experiment, the proposed method is compared with several state-of-the-art methods to demonstrate its efficiency. Furthermore, its application in multi-source NDVI change detection is verified by datasets from the GF-1 and GF-2 satellites.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127056515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stacked Random Forests: More Accurate and Better Calibrated","authors":"R. Hänsch","doi":"10.1109/IGARSS39084.2020.9324475","DOIUrl":"https://doi.org/10.1109/IGARSS39084.2020.9324475","url":null,"abstract":"Stacked Random Forests (SRFs) sequentially apply multiple Random Forests (RFs), where each instance uses the estimate of its predecessor as additional input to further refine the prediction. They have been shown to improve the performance of semantic segmentation of Polarimetric Synthetic Aperture Radar (PolSAR) images. Both RFs and SRFs not only provide an estimate of the class label of a query sample, but make a full probabilistic prediction, i.e. they provide the full class posterior. The probabilistic predictions of RFs are known to be usually well calibrated (i.e. the predictions match the expected probability distributions of each class). This paper answers the question of whether stacking leads to overfitting on the training data or decreases the calibration quality of RFs. Results indicate that neither is the case. Instead, classification accuracy steadily increases and then saturates quickly after only a few stacking levels. The predicted probabilities are generally well calibrated, and calibration quality also increases slightly for higher stacking levels.","PeriodicalId":444267,"journal":{"name":"IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127152490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}