{"title":"Finding equivalent keys in openstreetmap: semantic similarity computation based on extensional definitions","authors":"I. Majić, S. Winter, Martin Tomko","doi":"10.1145/3149808.3149813","DOIUrl":"https://doi.org/10.1145/3149808.3149813","url":null,"abstract":"Volunteered Geographic Information (VGI) projects, such as Open-StreetMap (OSM) enable the public to contribute to the collection of spatial data. In OSM, users may deviate from spatial feature annotation guidelines and create new tags (i.e. key=value pairs), even if recommended tags exist. This is problematic, as undocumented tags have no set meaning, and they potentially contribute to the dataset heterogeneity and thus reduce usability. This paper proposes an unsupervised approach to identify equivalent documented attribute keys to the used undocumented keys. Based on their extensional definitions through their values, co-occurring keys and geometries of the features they annotate, the semantic similarity of OSM keys is evaluated. The approach has been tested on the OSM dataset for the state of Victoria, Australia. Results have been evaluated against a set of manually detected equivalent keys and show that the method is plausible, but may fail if some assumptions about tag use are not enforced, e.g., semantically unique tags.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127356328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognizing terrain features on terrestrial surface using a deep learning model: an example with crater detection","authors":"Wenwen Li, Bin Zhou, Chia-Yu Hsu, Yixing Li, Fengbo Ren","doi":"10.1145/3149808.3149814","DOIUrl":"https://doi.org/10.1145/3149808.3149814","url":null,"abstract":"This paper exploits the use of a popular deep learning model - the faster-RCNN - to support automatic terrain feature detection and classification using a mixed set of optimal remote sensing and natural images. Crater detection is used as the case study in this research since this geomorphological feature provides important information about surface aging. Craters, such as impact craters, also effect global changes in many aspects, such as geography, topography, mineral and hydrocarbon production, etc. The collected data were labeled and the network was trained through a GPU server. Experimental results show that the faster-RCNN model coupled with a widely used convolutional network ZF-net performs well in detecting craters on the terrestrial surface.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124329591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic alignment of geographic features in contemporary vector data and historical maps","authors":"Weiwei Duan, Yao-Yi Chiang, Craig A. Knoblock, Vinil Jain, D. Feldman, Johannes H. Uhl, S. Leyk","doi":"10.1145/3149808.3149816","DOIUrl":"https://doi.org/10.1145/3149808.3149816","url":null,"abstract":"With large amounts of digital map archives becoming available, the capability to automatically extracting information from historical maps is important for many domains that require long-term geographic data, such as understanding the development of the landscape and human activities. In the previous work, we built a system to automatically recognize geographic features in historical maps using Convolutional Neural Networks (CNN). Our system uses contemporary vector data to automatically label examples of the geographic feature of interest in historical maps as training samples for the CNN model. The alignment between the vector data and geographic features in maps controls if the system can generate representative training samples, which has a significant impact on recognition performance of the system. Due to the large number of training data that the CNN model needs and tens of thousands of maps needed to be processed in an archive, manually aligning the vector data to each map in an archive is not practical. In this paper, we present an algorithm that automatically aligns vector data with geographic features in historical maps. Existing alignment approaches focus on road features and imagery and are difficult to generalize for other geographic features. Our algorithm aligns various types of geographic features in document images with the corresponding vector data. In the experiment, our alignment algorithm increased the correctness and completeness of the extracted railroad and river vector data for about 100% and 20%, respectively. For the performance of feature recognition, the aligned vector data had a 100% improvement on the precision while maintained a similar recall.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116777744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual landmark sequence-based indoor localization","authors":"Qing Li, Jiasong Zhu, Tao Liu, J. Garibaldi, Qingquan Li, G. Qiu","doi":"10.1145/3149808.3149812","DOIUrl":"https://doi.org/10.1145/3149808.3149812","url":null,"abstract":"This paper presents a method that uses common objects as landmarks for smartphone-based indoor localization and navigation. First, a topological map marking relative positions of common objects such as doors, stairs and toilets is generated from floor plan. Second, a computer vision technique employing the latest deep learning technology has been developed for detecting common indoor objects from videos captured by smartphone. Third, second order Hidden Markov model is applied to match detected indoor landmark sequence to topological map. We use videos captured by users holding smartphones and walking through corridors of an office building to evaluate our method. The experiment shows that computer vision technique is able to accurately and reliably detect 10 classes of common indoor objects and that second order hidden Markov model can reliably match the detected landmark sequence with the topological map. This work demonstrates that computer vision and machine learning techniques can play a very useful role in developing smartphone-based indoor positioning applications.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132771807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning for multisensor image resolution enhancement","authors":"C. Collins, J. M. Beck, S. Bridges, J. Rushing, S. Graves","doi":"10.1145/3149808.3149815","DOIUrl":"https://doi.org/10.1145/3149808.3149815","url":null,"abstract":"We describe a deep learning convolutional neural network (CNN) for enhancing low resolution multispectral satellite imagery without the use of a panchromatic image. For training, low resolution images are used as input and corresponding high resolution images are used as the target output (label). The CNN learns to automatically extract hierarchical features that can be used to enhance low resolution imagery. The trained network can then be effectively used for super-resolution enhancement of low resolution multispectral images where no corresponding high resolution image is available. The CNN enhances all four spectral bands of the low resolution image simultaneously and adjusts pixel values of the low resolution to match the dynamic range of the high resolution image. The CNN yields higher quality images than standard image resampling methods.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122370052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-based classification of GPS noise level using convolutional neural networks for accurate distance estimation","authors":"James Murphy, Yuanyuan Pao, Asif-ul Haque","doi":"10.1145/3149808.3149811","DOIUrl":"https://doi.org/10.1145/3149808.3149811","url":null,"abstract":"Accurate route prediction and distance calculation is an integral part of processing GPS data, particularly in the ride-sharing industry. One common approach has been to map match GPS data to estimate driving traces under noise and sparsity conditions. However, map-matched traces have proven to be at most as good as the underlying map data. Incorrect or missing map data can lead to large, improbable deviations, even when the geometry of the underlying raw GPS data is within tolerance of the actual route. Ideally, we want to take advantage of both the map-matched route and the GPS data to minimize the distance error. Therefore, we propose a method to classify the noise level (or trustworthiness) of small sub-sections of the input data on any given route to conditionally select between using the raw GPS data and the map-matched route as the best estimate of the driving path. For the classifier, each section is treated as an image matrix and is fed through a convolutional neural network trained only on a large amount of synthetic data. The result is a classifier that achieves human-level performance and can be used in a real-time system to reduce distance errors between the predicted and ground-truth traces of actual ride data.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"227 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116370774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating synthetic mobility traffic using RNNs","authors":"Vaibhav Kulkarni, B. Garbinato","doi":"10.1145/3149808.3149809","DOIUrl":"https://doi.org/10.1145/3149808.3149809","url":null,"abstract":"Mobility trajectory datasets are fundamental for system evaluation and experimental reproducibility. Privacy concerns today however, have restricted sharing of such datasets. This has led to the development of synthetic traffic generators, which simulate moving entities to create pseudo-realistic trajectory datasets. Existing work on traffic generation, superficially matches a-priori modeled mobility characteristics, which lacks realism and does not capture the substantive properties of human mobility. Critical applications however, require data that contains these complex, candid and hidden mobility patterns. To this end, we investigate the effectiveness of Recurrent Neural Networks (RNN) to learn these hidden patterns contained in an original dataset to produce a realistic synthetic dataset. We observe that, the ability of RNNs to learn and model problems over sequential data having long-term temporal dependencies is ideal for capturing the inherent properties of location traces. Additionally, the lack of intuitive high-level spatiotemporal structure and instability, guarantees trajectories that are different from the ones seen in the training dataset. Our preliminary evaluation results show that, our model effectively captures the sleep cycles and stay-points commonly observed in the considered training dataset, along with preserving the statistical characteristics and probability distributions of the movement transitions. Although, many questions remain to be answered, we show that generating synthetic traffic by learning the innate structure of human mobility through RNNs is a promising approach.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120966962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An application of convolutional neural network in street image classification: the case study of london","authors":"S. Law, Yao Shen, C. Seresinhe","doi":"10.1145/3149808.3149810","DOIUrl":"https://doi.org/10.1145/3149808.3149810","url":null,"abstract":"Street frontage quality is an important element in urban design as it contributes to the interest, social life and success of public spaces. To collect the data needed to evaluate street frontage quality at the city or regional level using traditional survey method is both costly and time consuming. As a result, this research proposes a pipeline that uses convolutional neural network to classify the frontage of a street image through the case study of Greater London. A novelty of the research is it uses both Google streetview images and 3D-model generated streetview images for the classification. The benefit of this approach is that it can provide a framework to test different urban parameters to help evaluate future urban design projects. The research finds encouraging results in classifying urban frontage quality using deep learning models. This research also finds that augmenting the baseline model with images produced from a 3D-model can improve slightly the accuracy of the results. However these results should be taken as preliminary, where we acknowledge several limitations such as the lack of adversarial analysis, labeled data, or parameter tuning. Despite these limitations, the results of the proof-of-concept study is positive and carries great potential in the application of urban data analytics.","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132928363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","authors":"","doi":"10.1145/3149808","DOIUrl":"https://doi.org/10.1145/3149808","url":null,"abstract":"","PeriodicalId":158183,"journal":{"name":"Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125879717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}