2019 Joint Urban Remote Sensing Event (JURSE): Latest Publications

Building Instance Classification using Social Media Images
2019 Joint Urban Remote Sensing Event (JURSE) | Pub Date: 2019-05-01 | DOI: 10.1109/JURSE.2019.8809056
E. J. Hoffmann, M. Werner, Xiaoxiang Zhu
Abstract: Understanding urbanization and planning for upcoming changes require detailed knowledge about the places where people live and work. Knowing the usage of buildings is therefore essential to distinguish residential from commercial areas. Assessing building usage from an aerial perspective alone is challenging and leads to unresolvable ambiguities. As a complementary data source, social media images taken from ground level give access to building façades as well as the social activities around buildings, both of which are valuable cues for assessing building usage. Toward the fusion of social media images and remote sensing data for this purpose, we present in this work a method to assess building usage from social media images taken in a building's neighborhood. Using a straightforward nearest-neighbor assignment of images to buildings and pre-trained networks for dimensionality reduction, we train a logistic regression classifier to distinguish five building usage classes. Applied to a study area in the Los Angeles metropolitan area, USA, we obtain an average precision of 0.67. Hence, we show that social media images can be a valuable additional source alongside remote sensing data.
Citations: 8
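The pipeline described in this abstract can be illustrated with a minimal sketch: geotagged images are assigned to their nearest building footprint, image features (which a pre-trained CNN would supply) are reduced to vectors, and a logistic regression distinguishes five usage classes. All data below are random stand-ins, not the authors' dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical building centroids and image geotags in a unit square.
buildings = rng.uniform(0, 1, size=(20, 2))
images = rng.uniform(0, 1, size=(200, 2))

# Step 1: nearest-neighbor mapping of each image to a building.
nn = NearestNeighbors(n_neighbors=1).fit(buildings)
_, idx = nn.kneighbors(images)
building_of_image = idx.ravel()

# Steps 2-3: stand-in 128-D features (a pre-trained network would produce
# these) and a logistic regression over five usage classes.
features = rng.normal(size=(200, 128))
labels = rng.integers(0, 5, size=200)  # e.g. residential, commercial, ...
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features[:3]))
```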
ES-CNN: An End-to-End Siamese Convolutional Neural Network for Hyperspectral Image Classification
2019 Joint Urban Remote Sensing Event (JURSE) | Pub Date: 2019-05-01 | DOI: 10.1109/JURSE.2019.8808991
M. Rao, L. Tang, Ping Tang, Zheng Zhang
Abstract: In recent years, deep learning-based methods have achieved great success in remote sensing image analysis. However, particularly in hyperspectral image classification, there is still a lack of labelled samples to feed these data-hungry deep models. To augment the amount of input data, models operating on pixel pairs have been proposed; the Siamese convolutional neural network (S-CNN) is a typical example, serving as a pixel-pair feature extractor that requires an additional classifier such as an SVM. In this paper, we propose an end-to-end version of S-CNN. Taking advantage of the pairwise input, and to make better use of spatial information, a voting strategy over neighbouring pixels is employed to determine the final class label of the centre pixel. Experimental results on real hyperspectral datasets show that the proposed method outperforms the original S-CNN by a considerable margin.
Citations: 6
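The neighbourhood voting step mentioned in the abstract can be sketched as a majority vote over predicted labels in a window around the centre pixel. The function name, window size, and label grid below are illustrative, not the paper's implementation.

```python
import numpy as np

def vote_center_label(pred_labels, row, col, window=3):
    """Majority vote over a window x window neighbourhood of (row, col)."""
    half = window // 2
    r0, r1 = max(0, row - half), min(pred_labels.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(pred_labels.shape[1], col + half + 1)
    patch = pred_labels[r0:r1, c0:c1].ravel()
    values, counts = np.unique(patch, return_counts=True)
    return values[np.argmax(counts)]  # most frequent label in the window

# Toy per-pixel predictions: class 1 dominates the 3x3 window around (1, 1).
preds = np.array([[1, 1, 2],
                  [1, 1, 2],
                  [1, 2, 2]])
print(vote_center_label(preds, 1, 1))
```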
What Data are needed for Semantic Segmentation in Earth Observation?
2019 Joint Urban Remote Sensing Event (JURSE) | Pub Date: 2019-05-01 | DOI: 10.1109/JURSE.2019.8809071
J. Castillo-Navarro, N. Audebert, Alexandre Boulch, B. L. Saux, S. Lefèvre
Abstract: This paper explores different aspects of semantic segmentation of remote sensing data using deep neural networks. Learning with deep neural networks was revolutionized by the creation of ImageNet. Remote sensing has benefited from these new techniques; however, Earth Observation (EO) datasets remain small in comparison. In this work, we investigate how to progress towards an ImageNet of remote sensing. In particular, two questions are addressed. First, how robust are existing supervised learning strategies with respect to data volume? Second, which properties are expected from a large-scale EO dataset? The main contributions of this work are (i) a thorough robustness analysis of existing supervised learning strategies with respect to remote sensing data volume, and (ii) the introduction of a new large-scale dataset named MiniFrance.
Citations: 4
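The data-volume robustness question the abstract raises can be sketched as a simple learning-curve experiment: train the same model on growing fractions of the training set and record the held-out score. The data and model here are synthetic stand-ins, not the paper's segmentation setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification problem standing in for an EO labelling task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train on 10%, 50%, and 100% of the training data and score on the same
# held-out set each time.
scores = {}
for frac in (0.1, 0.5, 1.0):
    n = int(len(X_tr) * frac)
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    scores[frac] = clf.score(X_te, y_te)
print(scores)
```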
Urban Scene Labeling Based on Multi-Modal Data Acquired from Aerial Sensor Platforms
2019 Joint Urban Remote Sensing Event (JURSE) | Pub Date: 2019-05-01 | DOI: 10.1109/JURSE.2019.8809035
M. Weinmann, Michael Weinmann
Abstract: In this paper, we address urban scene interpretation on the basis of multi-modal data acquired from aerial sensor platforms. These data comprise RGB color information, hyperspectral information, and 3D shape information. As hyperspectral data are known to contain a high degree of redundancy which, in turn, may affect the quality of derived classification results, we also apply techniques for dimensionality reduction and feature selection, as well as a transformation of the hyperspectral data into high-resolution multispectral Sentinel-2-like data. We use the different types of data to define sets of radiometric and geometric features, which are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different data types and their combinations for urban scene interpretation, we present results achieved on the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.
Citations: 1
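The experimental setup in the abstract, radiometric and geometric feature sets fed separately and combined to a Random Forest, can be sketched as follows. All feature values, dimensions, and labels below are synthetic stand-ins, not the MUUFL Gulfport data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300
radiometric = rng.normal(size=(n, 10))  # stand-ins for band reflectances
geometric = rng.normal(size=(n, 5))     # stand-ins for 3-D shape features
labels = rng.integers(0, 4, size=n)     # stand-ins for urban classes

# Train a Random Forest on each feature set separately and on the
# concatenation, mirroring the separate-and-combined comparison.
results = {}
for name, X in [("radiometric", radiometric),
                ("geometric", geometric),
                ("combined", np.hstack([radiometric, geometric]))]:
    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    results[name] = rf.score(X, labels)
print(results)
```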
Large-Scale Urban Mapping using Small Stack Multi-baseline TanDEM-X Interferograms
2019 Joint Urban Remote Sensing Event (JURSE) | Pub Date: 2019-05-01 | DOI: 10.1109/JURSE.2019.8808986
Yilei Shi, Yuanyuan Wang, Xiaoxiang Zhu, R. Bamler
Abstract: Multi-baseline synthetic aperture radar (SAR) interferometric techniques, such as SAR tomography, are well established for 3-D reconstruction in urban areas. These methods usually require fairly large interferometric stacks (> 20 images) for reliable reconstruction and are not directly applicable to SAR interferometric (InSAR) stacks with only a few acquisitions, as an extremely small number of acquisitions can severely bias the estimates of spectral estimators such as beamforming, which is often only asymptotically optimal. In addition, the small number of images causes severe ambiguity issues for pixels with low signal-to-noise ratio. In this work, we propose a new processing framework for 3-D reconstruction with TomoSAR using extremely small stacks. The applicability of the algorithm is demonstrated on TanDEM-X co-registered, phase-preserving single-look slant-range complex SAR images (CoSSC) over a large-scale test site covering the whole city of Munich, Germany. The reconstructed results are systematically compared with the global TanDEM-X digital elevation model (DEM) and a LiDAR dataset, demonstrating the potential for high-quality large-scale 3-D urban mapping.
Citations: 2
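The beamforming spectral estimator the abstract refers to can be sketched for a single noise-free scatterer: correlate the multi-baseline measurement with steering vectors on an elevation grid and take the power peak. The wavelength, slant range, baselines, and scatterer elevation below are illustrative values, not taken from the paper.

```python
import numpy as np

wavelength = 0.031  # m, X-band (illustrative)
r0 = 600e3          # m, slant range (illustrative)
baselines = np.array([0.0, 120.0, 250.0, 400.0])  # m, a small 4-image stack

def steering(s):
    """Steering vector for elevation s over the perpendicular baselines."""
    return np.exp(1j * 4 * np.pi * baselines * s / (wavelength * r0))

true_s = 40.0         # m, elevation of a single noise-free scatterer
g = steering(true_s)  # simulated stack measurement for one pixel

# Beamforming: power spectrum over an elevation grid, peak = estimate.
grid = np.linspace(-100, 100, 2001)
power = np.abs(np.array([steering(s).conj() @ g for s in grid])) ** 2
s_hat = grid[np.argmax(power)]
print(s_hat)
```

With only four acquisitions the spectrum's sidelobes are high, which is exactly the small-stack bias and ambiguity problem the paper addresses.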
Weakly Supervised Semantic Segmentation of Satellite Images
2019 Joint Urban Remote Sensing Event (JURSE) | Pub Date: 2019-04-08 | DOI: 10.1109/JURSE.2019.8809060
A. Nivaggioli, Hicham Randrianarivo
Abstract: Creating pixel-level annotations for every image in a database is a tedious prerequisite for training a semantic segmentation network, and it is even worse for aerial or satellite images, which are usually very large. With that in mind, we investigate how to use image-level annotations to perform semantic segmentation. Image-level annotations are much less expensive to acquire than pixel-level annotations, but a lot of information is lost for training: from the image-level labels alone, the model must discover by itself how to classify the different regions of an image. In this work, we use the method proposed by Ahn and Kwak [1] to produce pixel-level annotations from image-level annotations, and we compare the overall quality of the generated dataset with that of the original dataset. In addition, we propose an adaptation of AffinityNet that directly performs semantic segmentation. Our results show that the generated labels lead to the same performance when training several segmentation networks. Moreover, the quality of the semantic segmentation performed directly by AffinityNet and the random walk is close to that of the best fully-supervised approaches.
Citations: 24
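The first step of the weakly supervised pipeline this abstract builds on, turning image-level labels into pixel-level pseudo-labels, can be sketched as thresholding class activation maps (CAMs) restricted to the classes known to be present. The function, threshold, and toy CAMs below are illustrative; a real pipeline would obtain CAMs from a trained classification network.

```python
import numpy as np

def cams_to_pseudo_labels(cams, image_classes, threshold=0.5, ignore=255):
    """cams: (num_classes, H, W) activation maps normalised to [0, 1].
    image_classes: classes present according to the image-level label.
    Pixels where no present class fires above the threshold are ignored."""
    masked = np.full_like(cams, -np.inf)
    masked[image_classes] = cams[image_classes]  # keep only present classes
    best = masked.argmax(axis=0)                 # strongest present class
    confident = masked.max(axis=0) >= threshold
    return np.where(confident, best, ignore)

# Toy CAMs: class 1 fires in the top half, class 2 in the bottom half.
cams = np.zeros((3, 4, 4))
cams[1, :2, :] = 0.9
cams[2, 2:, :] = 0.8
pseudo = cams_to_pseudo_labels(cams, image_classes=[1, 2])
print(pseudo)
```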