Latest publications — 2019 Joint Urban Remote Sensing Event (JURSE)

Deep Learning Models to Count Buildings in High-Resolution Overhead Images
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8809058
Sylvain Lobry, D. Tuia
Abstract: This paper addresses the problem of counting buildings in very high-resolution overhead true-color imagery. We study and discuss the relevance of deep-learning-based methods to this task. Two architectures and two loss functions are proposed and compared. We show that a model enforcing equivariance to rotations is beneficial for the task of counting in remotely sensed images. We also highlight the importance of robustness to outliers of the loss function when considering remote sensing applications.
Citations: 6
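The abstract's point about outlier-robust losses can be illustrated generically: a Huber-style loss on per-image counts is quadratic for small residuals but only linear for large ones, so a few badly mis-counted images do not dominate training. This is a minimal NumPy sketch of the general idea, not the authors' actual loss; the function name and `delta` parameter are illustrative.

```python
import numpy as np

def huber_count_loss(pred, target, delta=1.0):
    """Huber loss on per-image building counts (illustrative).

    Quadratic near zero, linear for large residuals, so outlier
    images with wildly wrong counts contribute bounded gradients.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    r = np.abs(pred - target)
    quad = 0.5 * r ** 2                 # small-residual regime
    lin = delta * (r - 0.5 * delta)     # large-residual regime
    return np.where(r <= delta, quad, lin).mean()
```

With `delta=1.0`, a residual of 10 contributes 9.5 rather than the 50 a squared loss would give, which is the robustness property the abstract refers to.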
Towards combining Satellite Imagery and VGI for Urban LULC classification
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8808966
D. Ienco, K. Ose, C. Weber
Abstract: In this work we introduce and evaluate a deep learning model, mbCNN, that combines satellite imagery and Volunteered Geographic Information (VGI) data to deal with different types of built-up surfaces. Unlike most previous works, which consider only an Urban/Non-Urban setting with a single urban LULC class, we investigate the possibility of going a step further and distinguishing among several urban land-use classes: residential, industrial, sport fields, and non-urban. Experiments on a real-world dataset covering the city of Montpellier (South of France) are reported. The results demonstrate the ability of deep learning approaches to handle several types of urban LULC mapping, as well as the positive influence of integrating VGI knowledge in the process.
Citations: 1
Evaluating the Relationship Between Contextual Features Derived from Very High Spatial Resolution Imagery and Urban Attributes: A Case Study in Sri Lanka
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8809041
R. Engstrom, R. Harrison, M. Mann, Amanda Fletcher
Abstract: Extracting information about variations within urban areas using satellite imagery has generally focused on mapping individual buildings or slum versus non-slum areas. While these data are useful, they can run into issues in very dense urban areas; additionally, slums have a subjective definition. In previous research we found that contextual features are related to population, census variables, poverty, and other values, but we had not explored which urban attributes (i.e., buildings and roads) these features represent. In this study we seek to determine the correlation between contextual features calculated on Very High Spatial Resolution (VHSR) satellite data and urban attributes derived from OpenStreetMap (OSM) for portions of multiple cities in Sri Lanka. Results indicate that individual contextual features are highly correlated with building area, building density, road area, road density, total built-up area, and other features. Moreover, when multiple contextual features are combined within a model, they can explain from 70 to 92 percent of the variance of these urban features within the study area. This indicates that contextual features are very strong indicators of urban variability and can be used to map differences within the urban setting. This may allow us to forgo mapping each building and road individually when mapping urban areas in future projects.
Citations: 5
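The "70 to 92 percent of the variance" figures are R² values from regression models relating contextual features to urban attributes. As a generic illustration (with synthetic stand-in data, not the study's actual features), ordinary least squares and the resulting coefficient of determination can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 image tiles, 3 contextual features each,
# and a target "building density" the features partly explain.
X = rng.normal(size=(200, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.normal(size=200)

# Ordinary least squares with an intercept column appended.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: fraction of target variance explained by the fitted model.
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
```

An R² of 0.70 to 0.92, as reported in the study, would mean the combined contextual features account for most but not all of the variability in the OSM-derived urban attribute.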
Automated Mapping Of Accessibility Signs With Deep Learning From Ground-level Imagery and Open Data
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8808961
A. Nassar, S. Lefèvre
Abstract: In some areas or regions, accessible parking spots are not geolocalized and are therefore both difficult to find online and excluded from open data sources. In this paper, we aim to detect accessible parking signs in street-view panoramas and geolocalize them. Object detection is an open challenge in computer vision, and numerous methods exist, whether based on handcrafted features or deep learning. Our method processes Google Street View images of French cities in order to geolocalize accessible parking signs on posts and on the ground where the parking spot is not available in GIS systems. To accomplish this, we rely on the deep learning object detection method Faster R-CNN with Region Proposal Networks, which has shown excellent performance in object detection benchmarks. This helps map accurate locations of existing parking areas, which can be used to build services or update online mapping services such as OpenStreetMap. We provide preliminary results showing the feasibility and relevance of our approach.
Citations: 2
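Going from a detection in an equirectangular panorama to a map coordinate can be sketched generically: the horizontal pixel position maps linearly to a bearing relative to the camera heading, and projecting an estimated distance along that bearing gives a lat/lon offset. This is a simplified flat-earth sketch, not the paper's geolocalization procedure; the function name and all parameters are illustrative assumptions.

```python
import math

def sign_location(cam_lat, cam_lon, cam_heading_deg, px_x, pano_width, dist_m):
    """Rough geolocation of a detected sign from one panorama (illustrative).

    In an equirectangular panorama the horizontal axis spans 360 degrees,
    so pixel column px_x maps linearly to a bearing; dist_m metres along
    that bearing, on a local flat-earth approximation, gives lat/lon.
    """
    bearing = math.radians(
        (cam_heading_deg + (px_x / pano_width - 0.5) * 360.0) % 360.0
    )
    dlat = dist_m * math.cos(bearing) / 111_320.0  # metres per degree latitude
    dlon = dist_m * math.sin(bearing) / (111_320.0 * math.cos(math.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon
```

In practice the distance to the sign would itself have to be estimated (e.g. from triangulation across panoramas or from known camera height), which is where most of the positioning error comes from.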
Urban Expansion Trajectories in China’s 36 Major Cities
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8808981
Yao Shen, Huanfeng Shen, Qing Cheng, Liwen Huang, Liangpei Zhang
Abstract: As the largest developing country, China has experienced dramatic urban sprawl since the "reform and opening-up" policy began at the end of the 1970s. Understanding the patterns of past urbanization in China is of great importance for promoting sustainable development in the future. In this paper, we monitor three decades of urban expansion in China’s 36 major cities, based on spectral mixture analysis of remotely sensed satellite images. The results demonstrate that these major cities expanded 5.85-fold from 1986 to 2015, with an average expansion of 15.51 km² per city per year. We find that the urban expansion trajectories show three different modes, i.e., exponential, linear, and s-shaped, which are closely related to the city development level. In addition, there is an interesting common tendency for the impervious surface to first increase and then decrease in the old city zones.
Citations: 0
Learning geometric soft constraints for multi-view instance matching across street-level panoramas
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8808935
A. Nassar, Nico Lang, S. Lefèvre, J. D. Wegner
Abstract: We present a new approach for matching tree instances across multiple street-view panorama images, with the ultimate goal of city-scale street-tree mapping with high positioning accuracy. What makes this task challenging is the strong change in viewpoint, different lighting conditions, the high similarity of neighboring trees, and variability in scale. We propose to turn (tree) instance matching into a learning task, where image appearance and geometric relationships between views fruitfully interact. Our approach constructs a Siamese convolutional neural network that learns to match two views of the same tree, given many candidate tree image cut-outs and geographic information from the two panorama images. In addition to image features, we propose utilizing location information about the camera and the tree. Our method is compared to existing patch-matching methods, demonstrating its advantage over the state of the art. This takes us one step closer to the ultimate goal of city-wide tree mapping based solely on panorama imagery, to the benefit of city administrations.
Citations: 5
Contextual Information Based SAR Tomography of Urban Areas
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8809076
A. Budillon, A. C. Johnsy, Gilda Schirinzi
Abstract: SAR Tomography (TomoSAR) is a multidimensional imaging technique that has proven its ability to localize multiple scatterers in the three-dimensional observed scene, allowing the reconstruction of the elevation profile of structures on the ground. Tomographic approaches usually estimate the elevation distribution of the scatterers in each range-azimuth pixel independently from the neighboring ones (local approaches), so no relation among the elevations of neighboring pixels is imposed in the tomographic processing. In this paper, local contextual information contained in the data is exploited with the aim of improving the 3D reconstruction (semi-local approaches) and increasing the number of reliably reconstructed scatterers in the tomographic scatterer cloud. Results on real data validate the proposed approach.
Citations: 7
Improving the SLEUTH urban growth model via temporal consistency in urban input data
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8809025
Sarochinee Kaewthani, Chaiyapon Keeratikasikorn
Abstract: Changes in an urban growth model were investigated after applying a temporal consistency evaluation to classified urban images. A consistency evaluation involving both temporal filtering and heuristic reasoning was applied to the sequence of classified urban maps for further improvement. The SLEUTH urban growth model was tested in regions of uncontrolled urban expansion and calibrated using data collected from the major urban area of Nakhon Ratchasima, Thailand, in 1989, 1994, 1999, and 2005. The best value of the Optimal SLEUTH Metric (OSM) was calculated for urban inputs with and without temporal consistency checking. The OSM value with consistency checking was higher than without it, indicating a better explanation of urban growth in the study area.
Citations: 0
Large-scale building extraction in very high-resolution aerial imagery using Mask R-CNN
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8808977
Dorothee Stiller, Thomas Stark, M. Wurm, S. Dech, H. Taubenböck
Abstract: Urban areas are hotspots of complex and dynamic alterations of the Earth’s surface. Using deep learning (DL) techniques in remote sensing applications can significantly help document these tremendous changes. Open-source building data at a very high level of detail are still scarce or incomplete for many regions, hindering research and policy from properly providing knowledge on urban structures. In this study, we use a convolutional neural network to extract buildings for the city of Santiago de Chile. We deploy the recently released Mask R-CNN and use a pretrained model (PM) which has already been trained with remote sensing imagery. We fine-tune the PM with very high-resolution (VHR) airborne RGB images from our study region to generate the fine-tuned model (FM). To extend the amount of training data, we test several data augmentation methods for training purposes and evaluate their performance in the context of urban environments. We achieve a highest overall accuracy of 92% by using augmentations and the generated FM. Our findings encourage the use of DL methods in the urban context. The presented method can be adapted and applied to other urban regions globally and can help overcome gaps in open-source building data for assessing urban environments.
Citations: 19
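Simple geometric augmentations of the kind the study evaluates can be generated with NumPy's dihedral transforms. This sketch is illustrative only, not the authors' pipeline: it yields the eight rotation/flip variants of an image tile together with its building mask, so that labels stay aligned with pixels.

```python
import numpy as np

def augment(tile, mask):
    """Yield the 8 dihedral variants (4 rotations x optional flip) of an
    image tile and its building mask -- a cheap way to multiply training
    data for a segmentation model. Illustrative sketch only."""
    for k in range(4):
        r_img, r_msk = np.rot90(tile, k), np.rot90(mask, k)
        yield r_img, r_msk                          # pure rotation
        yield np.fliplr(r_img), np.fliplr(r_msk)    # rotation + flip
```

Because the transform is applied identically to tile and mask, each variant remains a valid training pair; for overhead imagery, all eight orientations are equally plausible views of the same scene.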
The necessary yet complex evaluation of 3D city models: a semantic approach
2019 Joint Urban Remote Sensing Event (JURSE) Pub Date : 2019-05-01 DOI: 10.1109/JURSE.2019.8809002
O. Ennafii, C. Mallet, A. L. Bris, Florent Lafarge
Abstract: The automatic modeling of urban scenes in 3D from geospatial data has been studied for more than thirty years. However, the output models still have to undergo a tedious correction task at city scale. In this work, we propose an approach for automatically evaluating the quality of 3D building models. A taxonomy of potential errors is first proposed. Handcrafted features are computed based on the geometric properties of buildings and, when available, Very High Resolution images and depth data. They are fed into a Random Forest classifier to predict the quality of the models. We tested our framework on three distinct urban areas in France and can satisfactorily detect, on average, 96% of the most frequent errors.
Citations: 2