{"title":"GIS-Supervised Building Extraction With Label Noise-Adaptive Fully Convolutional Neural Network","authors":"Zenghui Zhang, Weiwei Guo, Mingjie Li, Wenxian Yu","doi":"10.1109/LGRS.2019.2963065","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2963065","url":null,"abstract":"Automatic building extraction from aerial or satellite images is a dense pixel prediction task required by many applications. Training a deep neural network for building extraction demands a large amount of cleanly labeled data, but collecting such pixel-wise annotations manually is labor intensive. Fortunately, the building footprint data of geographic information system (GIS) maps provide a cheap way of generating building labels, although these labels are imperfect due to misalignment between the GIS maps and the images. In this letter, we consider the task of learning a deep neural network for pixel-wise building extraction from such noisy label data. To this end, we propose a general label noise-adaptive (NA) neural network framework consisting of a base network followed by an additional probability transition module (PTM), which is introduced to capture the relationship between the true label and the noisy label. The parameters of the PTM can be estimated as part of the training process of the whole network by the off-the-shelf backpropagation algorithm. We conduct experiments on a real-world data set to demonstrate that the proposed PTM can better handle noisy labels and improve the performance of convolutional neural networks (CNNs) trained on noisy label data generated from GIS maps for building extraction. The experimental results indicate that a fully convolutional network equipped with the proposed PTM offers a promising way to reduce the manual annotation effort required by labor-expensive object extraction tasks in remote sensing images.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"17 1","pages":"2135-2139"},"PeriodicalIF":4.8,"publicationDate":"2020-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2963065","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46085301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corn Plant Counting Using Deep Learning and UAV Images","authors":"Bruno T. Kitano, C. Mendes, A. R. Geus, Henrique C. Oliveira, Jefferson R. Souza","doi":"10.1109/LGRS.2019.2930549","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2930549","url":null,"abstract":"The adoption of new technologies, such as unmanned aerial vehicles (UAVs), image processing, and machine learning, is disrupting traditional concepts in agriculture and opening a new range of research possibilities. Plant density is one of the most important corn (Zea mays L.) yield factors, yet its precise measurement after the emergence of plants is impractical in large-scale production fields due to the amount of labor required. This letter develops techniques that enable corn plant counting and the automation of this process through deep learning and computer vision, using images of several corn crops obtained with a low-cost UAV platform equipped with an RGB sensor.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":" ","pages":""},"PeriodicalIF":4.8,"publicationDate":"2019-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2930549","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46032488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extreme Learning Machine-Based Heterogeneous Domain Adaptation for Classification of Hyperspectral Images","authors":"Li Zhou, Li Ma","doi":"10.1109/LGRS.2019.2909543","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2909543","url":null,"abstract":"An extreme learning machine (ELM)-based heterogeneous domain adaptation (HDA) algorithm is proposed for the classification of remote sensing images. In the adaptive ELM network, one hidden layer is used for the source data to provide the random features, whereas two hidden layers are set for target data to produce the random features as well as a transformation matrix. DA is achieved by constraining both the source data and the transformed target data to share the same output weights. Moreover, manifold regularization is adopted to preserve the local geometry of unlabeled target data. The proposed ELM-based HDA (EHDA) method is applied to cross-domain classification of remote sensing images, and the experimental results using multisensor remote sensing images demonstrate the effectiveness of the proposed approach.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"16 1","pages":"1781-1785"},"PeriodicalIF":4.8,"publicationDate":"2019-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2909543","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47798419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Infrared and Visible Image Fusion Method by Using Hybrid Representation Learning","authors":"G. He, Jiaqi Ji, Dandan Dong, Jun Wang, Jianping Fan","doi":"10.1109/LGRS.2019.2907721","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2907721","url":null,"abstract":"In remote sensing image fusion, infrared and visible images have very different brightness due to their disparate imaging mechanisms, so nontarget regions in the infrared image often disturb the fusion of details from the visible image. This letter proposes a novel infrared and visible image fusion method based on hybrid representation learning, combining dictionary-learning-based joint sparse representation (JSR) and nonnegative sparse representation (NNSR). Different fusion strategies are adopted for the mean image, which carries the primary energy information, and for the deaveraged image, which contains the important detail features. Since the deaveraged image contains a large amount of high-frequency detail from the source images, JSR is utilized to sparsely and accurately extract its common and innovation features, thus merging the high-frequency details accurately. Since the mean image carries the low-frequency, overview features of the source images, it is partitioned into different feature regions with NNSR, and these regions are then fused separately. The proposed method, on the one hand, eliminates the effect on the fusion result of the very different brightness caused by the different imaging mechanisms of the infrared and visible images; on the other hand, it improves the readability and accuracy of the fused image. Experimental results show that, compared with classical and state-of-the-art fusion methods, the proposed method not only integrates the infrared target accurately but also retains the rich background details of the visible image, achieving superior fusion quality.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"16 1","pages":"1796-1800"},"PeriodicalIF":4.8,"publicationDate":"2019-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2907721","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44449172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Generalized Volume Scattering Model-Based Vegetation Index From Polarimetric SAR Data","authors":"D. Ratha, D. Mandal, Vineet Kumar, H. Mcnairn, A. Bhattacharya, A. Frery","doi":"10.1109/LGRS.2019.2907703","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2907703","url":null,"abstract":"In this letter, we propose a novel vegetation index from polarimetric synthetic-aperture radar (PolSAR) data using the generalized volume scattering model. The geodesic distance between two Kennaugh matrices projected on a unit sphere proposed by Ratha et al. is used in this letter. This distance is utilized to compute a similarity measure between the observed Kennaugh matrix and generalized volume scattering models. A factor is estimated corresponding to the ratio of the minimum to the maximum geodesic distances between the observed Kennaugh matrix and the set of elementary targets: trihedral, cylinder, dihedral, and narrow dihedral. This factor is then scaled and multiplied with the similarity measure to obtain the novel vegetation index. The proposed vegetation index is compared with the radar vegetation index (RVI) proposed by Kim and van Zyl. 
A time series of RADARSAT-2 data acquired during the Soil Moisture Active Passive Validation Experiment 2016 (SMAPVEX16-MB) campaign in Manitoba, Canada, is used to assess the proposed vegetation index.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"16 1","pages":"1791-1795"},"PeriodicalIF":4.8,"publicationDate":"2019-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2907703","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44604132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discriminative Adaptation Regularization Framework-Based Transfer Learning for Ship Classification in SAR Images","authors":"Y. Xu, H. Lang, Lihui Niu, Chenguang Ge","doi":"10.1109/LGRS.2019.2907139","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2907139","url":null,"abstract":"Ship classification in synthetic-aperture radar (SAR) images is of great significance for dealing with various marine matters. Although traditional supervised learning methods have recently achieved dramatic successes, they are limited by insufficient labeled training data. This letter presents a novel unsupervised domain adaptation (DA) method, termed discriminative adaptation regularization framework-based transfer learning (D-ARTL), to address the case in which no labeled training data are available in the SAR image domain, i.e., the target domain (TD). D-ARTL improves the original ARTL by adding a novel source discriminative information preservation (SDIP) regularization term. This improvement efficiently transfers interclass discriminative ability from the source domain (SD) to the TD while aligning the cross-domain distributions. 
Extensive experiments have verified that D-ARTL outperforms state-of-the-art methods on the task of ship classification in SAR images by transferring the automatic identification system (AIS) information.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"16 1","pages":"1786-1790"},"PeriodicalIF":4.8,"publicationDate":"2019-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2907139","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41760101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative Cross-Domain $k$ NN Search for Remote Sensing Image Processing","authors":"Ying Zhong, Wei Weng, Jianmin Li, Shunzhi Zhu","doi":"10.1109/LGRS.2019.2906686","DOIUrl":"https://doi.org/10.1109/LGRS.2019.2906686","url":null,"abstract":"<inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>NN search is a fundamental function in image processing and is useful in many real applications, including image clustering, image classification, and image understanding and analysis in general. In this light, we propose and study a novel collaborative cross-domain <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>NN search (CD-<inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>NN) in multidomain space. Given a query location <inline-formula> <tex-math notation=\"LaTeX\">$q$ </tex-math></inline-formula> in a multidomain space (e.g., spatial domain, temporal domain, textual domain, and so on), CD-<inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>NN finds the top-<inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula> data points with the minimum distance to <inline-formula> <tex-math notation=\"LaTeX\">$q$ </tex-math></inline-formula>. This problem is challenging for two reasons: first, how to define a practical distance measure for multidomain space; and second, how to prune the search space efficiently across multiple domains. To address these challenges, we define a linear combination method-based distance measure for multidomain space. Based on this distance measure, a collaborative search method is developed to constrain the cross-domain search space to a comparatively small range. A pair of upper and lower bounds is defined to prune the search space in multiple domains effectively. 
Finally, we conduct extensive experiments to verify that the developed methods achieve high performance.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"16 1","pages":"1801-1805"},"PeriodicalIF":4.8,"publicationDate":"2019-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/LGRS.2019.2906686","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46361514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Late Winter Observations of Sea Ice Pressure Ridge Sail Height","authors":"K. Duncan, S. Farrell, J. Hutchings, J. Richter-Menge","doi":"10.1002/essoar.10500429.1","DOIUrl":"https://doi.org/10.1002/essoar.10500429.1","url":null,"abstract":"Analysis of high-resolution imagery acquired by the Digital Mapping System during annual, late-winter NASA Operation IceBridge surveys of Arctic sea ice between 2010 and 2018 reveals that pressure ridge sail heights (<inline-formula> <tex-math notation=\"LaTeX\">${H}_{\\mathbf{S}}$ </tex-math></inline-formula>) vary regionally and interannually. We find distinct differences in <inline-formula> <tex-math notation=\"LaTeX\">${H}_{\\mathbf{S}}$ </tex-math></inline-formula> distributions between the central Arctic (CA) and the Beaufort/Chukchi Seas region. Our results show that differences with respect to ice type occur within the tails of the <inline-formula> <tex-math notation=\"LaTeX\">${H}_{\\mathbf{S}}$ </tex-math></inline-formula> distributions and that the 95th and 99th percentiles of <inline-formula> <tex-math notation=\"LaTeX\">${H}_{\\mathbf{S}}$ </tex-math></inline-formula> are strong indicators of the predominant ice type in which the pressure ridge formed. During the first part of the study period, <inline-formula> <tex-math notation=\"LaTeX\">${H}_{\\mathbf{S}}$ </tex-math></inline-formula> increased, with the largest sails observed in the winters of 2015 and 2016, after which <inline-formula> <tex-math notation=\"LaTeX\">${H}_{\\mathbf{S}}$ </tex-math></inline-formula> declined, suggesting that the most heavily deformed sea ice may have drifted beyond the area surveyed and exited the CA. 
Our analysis of the interannual and regional variability in sea ice deformation in the western Arctic during the last decade provides an improved understanding of sail height that will help advance ridge parameterizations in sea ice models.","PeriodicalId":13046,"journal":{"name":"IEEE Geoscience and Remote Sensing Letters","volume":"18 1","pages":"1525-1529"},"PeriodicalIF":4.8,"publicationDate":"2019-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/essoar.10500429.1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48221682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}