{"title":"Deep Learning Models to Count Buildings in High-Resolution Overhead Images","authors":"Sylvain Lobry, D. Tuia","doi":"10.1109/JURSE.2019.8809058","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8809058","url":null,"abstract":"This paper addresses the problem of counting buildings in very high-resolution overhead true color imagery. We study and discuss the relevance of deep-learning based methods to this task. Two architectures and two loss functions are proposed and compared. We show that a model enforcing equivariance to rotations is beneficial for the task of counting in remotely sensed images. We also highlight the importance of robustness to outliers of the loss function when considering remote sensing applications.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125049891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards combining Satellite Imagery and VGI for Urban LULC classification","authors":"D. Ienco, K. Ose, C. Weber","doi":"10.1109/JURSE.2019.8808966","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8808966","url":null,"abstract":"In this work we introduce and evaluate a deep learning model, mbCNN, that combines satellite imagery and Volunteered Geographic Information (VGI) data to deal with different types of built-up surfaces. Unlike most previous works, which only consider Urban/Non-Urban settings involving a single urban LULC class, here we investigate the possibility of going a step further and distinguishing among several urban land use classes: residential, industrial, sport fields and non-urban. Experiments on a real-world dataset covering the city of Montpellier (South of France) are reported. The results demonstrate the ability of deep learning approaches to handle several types of urban LULC mapping, as well as the positive influence of integrating VGI knowledge into the process.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123995110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Relationship Between Contextual Features Derived from Very High Spatial Resolution Imagery and Urban Attributes: A Case Study in Sri Lanka","authors":"R. Engstrom, R. Harrison, M. Mann, Amanda Fletcher","doi":"10.1109/JURSE.2019.8809041","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8809041","url":null,"abstract":"Extracting information about variations within urban areas using satellite imagery has generally focused on mapping individual buildings or slum versus non-slum areas. While these data are useful, they can run into issues in very dense urban areas; additionally, slums have a subjective definition. In previous research we found that contextual features are related to population, census variables, poverty, and other values, but we had not explored which urban attributes (i.e., buildings and roads) these features represent. In this study we seek to determine the correlation between contextual features calculated on Very High Spatial Resolution (VHSR) satellite data and urban attributes derived from OpenStreetMap (OSM) for portions of multiple cities in Sri Lanka. Results indicate that individual contextual features are highly correlated with building area, building density, road area, road density, total built-up area, and other features. Moreover, when multiple contextual features are combined within a model, they can explain from 70 to 92 percent of the variance of these urban features within the study area. This indicates that contextual features are very strong indicators of urban variability and can be used to map differences within the urban setting. This may allow us to forgo mapping each building and road individually when mapping urban areas in future projects.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"312 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116763442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Mapping Of Accessibility Signs With Deep Learning From Ground-level Imagery and Open Data","authors":"A. Nassar, S. Lefèvre","doi":"10.1109/JURSE.2019.8808961","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8808961","url":null,"abstract":"In some areas or regions, accessible parking spots are not geolocalized and are therefore both difficult to find online and excluded from open data sources. In this paper, we aim to detect accessible parking signs in street-view panoramas and geolocalize them. Object detection is an open challenge in computer vision, and numerous methods exist, whether based on handcrafted features or deep learning. Our method processes Google Street View images of French cities in order to geolocalize accessible parking signs on posts and on the ground where the parking spot is not available in GIS systems. To accomplish this, we rely on the deep learning object detection method Faster R-CNN with Region Proposal Networks, which has demonstrated excellent performance on object detection benchmarks. This helps to map accurate locations of existing parking areas, which can be used to build services or update online mapping services such as OpenStreetMap. We provide preliminary results which show the feasibility and relevance of our approach.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130547729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Urban Expansion Trajectories in China’s 36 Major Cities","authors":"Yao Shen, Huanfeng Shen, Qing Cheng, Liwen Huang, Liangpei Zhang","doi":"10.1109/JURSE.2019.8808981","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8808981","url":null,"abstract":"As the largest developing country, China has experienced dramatic urban sprawl since the \"reform and opening-up\" policy started at the end of the 1970s. Understanding the laws of past urbanization in China is of great importance for promoting sustainable development in the future. In this paper, we monitor three decades of urban expansion in China’s 36 major cities, based on the spectral mixture analysis of remotely sensed satellite images. The results demonstrate that these major cities expanded by 5.85 times from 1986 to 2015, with an average expansion area of 15.51 km2 per city per year. We find that the urban expansion trajectories show three different modes, i.e., exponential, linear and S-shaped, which are closely related to the city development level. In addition, there is an interesting common tendency of the impervious surface first increasing and then decreasing in the old city zones.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126058739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning geometric soft constraints for multi-view instance matching across street-level panoramas","authors":"A. Nassar, Nico Lang, S. Lefèvre, J. D. Wegner","doi":"10.1109/JURSE.2019.8808935","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8808935","url":null,"abstract":"We present a new approach for matching tree instances across multiple street-view panorama images for the ultimate goal of city-scale street-tree mapping with high positioning accuracy. What makes this task challenging is the strong change in view-point, different lighting conditions, high similarity of neighboring trees, and variability in scale. We propose to turn (tree) instance matching into a learning task, where image appearance and geometric relationships between views fruitfully interact. Our approach constructs a Siamese convolutional neural network that learns to match two views of the same tree given many candidate tree image cut-outs and geographic information of the two panorama images. In addition to image features, we propose utilizing location information about the camera and the tree. Our method is compared to existing patch matching methods, demonstrating its advantage over the state of the art. This takes us one step closer to the ultimate goal of city-wide tree mapping based solely on panorama imagery to benefit city administration.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129679884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextual Information Based SAR Tomography of Urban Areas","authors":"A. Budillon, A. C. Johnsy, Gilda Schirinzi","doi":"10.1109/JURSE.2019.8809076","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8809076","url":null,"abstract":"SAR Tomography (TomoSAR) is a multidimensional imaging technique that has proven its ability in localizing multiple scatterers in the three-dimensional observed scene, allowing the reconstruction of the elevation profile of the structures on the ground. Tomographic approaches usually estimate the elevation distribution of the scatterers in each range-azimuth pixel independently from the neighboring ones (local approaches). Thus, no relation among the elevations of neighboring pixels is imposed in the tomographic processing. In this paper, local contextual information contained in the data is exploited with the aim of improving the 3D reconstruction (semi-local approaches) and increasing the number of reliably reconstructed scatterers in the tomographic scatterer cloud. Results on real data validate the proposed approach.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130692221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the SLEUTH urban growth model via temporal consistency in urban input data","authors":"Sarochinee Kaewthani, Chaiyapon Keeratikasikorn","doi":"10.1109/JURSE.2019.8809025","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8809025","url":null,"abstract":"Changes in an urban growth model were investigated after applying a temporal consistency evaluation to classified urban images. A consistency evaluation involving both temporal filtering and heuristic reasoning was applied to the sequence classification of urban maps for further improvement. The SLEUTH urban growth model was tested in regions of uncontrolled urban expansion. SLEUTH was calibrated using data collected from the major urban area of Nakhon Ratchasima, Thailand in 1989, 1994, 1999 and 2005. The best value of the Optimal SLEUTH Metric (OSM) was calculated for urban inputs with and without temporal consistency checking. The OSM value with temporal consistency checking was higher than without, presenting a better explanation of urban growth in the study area.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131270807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large-scale building extraction in very high-resolution aerial imagery using Mask R-CNN","authors":"Dorothee Stiller, Thomas Stark, M. Wurm, S. Dech, H. Taubenböck","doi":"10.1109/JURSE.2019.8808977","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8808977","url":null,"abstract":"Urban areas are hotspots of complex and dynamic alterations of the Earth’s surface. Using deep learning (DL) techniques in remote sensing applications can significantly contribute to documenting these tremendous changes. Open source building data at a very high level of detail are still scarce or incomplete for many regions, hindering research and policy from properly providing knowledge on urban structures. In this study, we use a convolutional neural network to extract buildings for the city of Santiago de Chile. We deploy the recently released Mask R-CNN and use a pretrained model (PM) which has already been trained with remote sensing imagery. We fine-tune the PM with very high-resolution (VHR) airborne RGB images from our study region and generate the fine-tuned model (FM). To extend the amount of training data, we test several data augmentation methods for training purposes and evaluate their performance in the context of urban environments. We achieve a highest overall accuracy of 92% by using augmentations and the generated FM. Our findings encourage the use of DL methods in the urban context. The presented method can be adapted and applied to other urban regions worldwide and can help overcome gaps in open source building data for assessing urban environments.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"287 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115339924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The necessary yet complex evaluation of 3D city models: a semantic approach","authors":"O. Ennafii, C. Mallet, A. L. Bris, Florent Lafarge","doi":"10.1109/JURSE.2019.8809002","DOIUrl":"https://doi.org/10.1109/JURSE.2019.8809002","url":null,"abstract":"The automatic modeling of urban scenes in 3D from geospatial data has been studied for more than thirty years. However, the output models still have to undergo a tedious task of correction at city scale. In this work, we propose an approach for automatically evaluating the quality of 3D building models. A taxonomy of potential errors is first proposed. Handcrafted features are computed, based on the geometric properties of buildings and, when available, Very High Resolution images and depth data. They are fed into a Random Forest classifier to predict the quality of the models. We tested our framework on three distinct urban areas in France. We satisfactorily detect, on average, 96% of the most frequent errors.","PeriodicalId":299183,"journal":{"name":"2019 Joint Urban Remote Sensing Event (JURSE)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120948779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}