Latest publications in the Proceedings of the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery

Sensing overlapping geospatial communities from human movements using graph affiliation generation models
Peng Luo, Di Zhu
DOI: 10.1145/3557918.3565862 (https://doi.org/10.1145/3557918.3565862)
Published: 2022-11-01
Abstract: Geographical units densely connected by human movements can be treated as a geospatial community. Detecting geospatial communities in a mobility network reveals key characteristics of human movements and urban structures. Recent studies have found that communities can be overlapping, in that one location may belong to multiple communities, posing great challenges to classic disjoint community detection methods that identify only single-affiliation relationships. In this work, we propose a Geospatial Overlapping Community Detection (GOCD) framework based on graph generation models and graph-based deep learning. GOCD aims to detect geographically overlapping communities with respect to the multiplex connections underlying human movements, including weak and long-range ties. The detection process is formalized as deriving the optimized probability distribution of geographic units' community affiliations in order to generate the spatial network, i.e., the most reasonable community affiliation matrix given the observed network structure. Further, a graph convolutional network (GCN) is introduced to approximate the affiliation probabilities via a deep learning strategy. The GOCD framework outperformed existing baselines on non-spatial benchmark datasets in terms of accuracy and speed. A case study of mobile positioning data in the Twin Cities Metropolitan Area (TCMA), Minnesota, was presented to validate our model on real-world human mobility networks. Our empirical results unveiled the overlapping spatial structures of communities, the overlapping intensity for each census block group (CBG), and the spatially heterogeneous structure of community affiliations in the Twin Cities.
Citations: 0
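
The abstract above describes generating the observed mobility graph from a node-by-community affiliation matrix, with a GCN estimating the affiliations. The sketch below illustrates that general idea with a GCN encoder and a Bernoulli-Poisson graph-generation likelihood in the spirit of affiliation graph models; it is not the authors' GOCD code, and the class names, dimensions, toy data, and dense-adjacency simplification are all assumptions made for illustration.

```python
# Illustrative sketch only: GCN-based overlapping community affiliations
# with a Bernoulli-Poisson generative link, P(edge i-j) = 1 - exp(-F_i . F_j).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffiliationGCN(nn.Module):
    """Two-layer GCN mapping node features to a non-negative affiliation matrix."""

    def __init__(self, in_dim: int, hidden_dim: int, n_communities: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, n_communities)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: symmetrically normalized adjacency (dense here for brevity)
        h = F.relu(adj_norm @ self.w1(x))
        return F.relu(adj_norm @ self.w2(h))  # shape (n_nodes, n_communities)


def affiliation_nll(affil: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the observed 0/1 adjacency under the affiliation model."""
    scores = affil @ affil.t()                                # F_i . F_j for every pair
    edge_nll = -torch.log(1.0 - torch.exp(-scores) + 1e-10)   # -log P(edge)
    non_edge_nll = scores                                     # -log P(no edge)
    return (adj * edge_nll + (1.0 - adj) * non_edge_nll).mean()


# Toy graph for demonstration (random features and a sparse symmetric adjacency).
n_nodes, in_dim = 200, 16
x = torch.randn(n_nodes, in_dim)
adj = (torch.rand(n_nodes, n_nodes) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()                           # symmetrize
adj_norm = adj + torch.eye(n_nodes)                           # add self-loops
deg_inv_sqrt = adj_norm.sum(1).clamp(min=1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj_norm * deg_inv_sqrt[None, :]

model = AffiliationGCN(in_dim, 64, n_communities=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = affiliation_nll(model(x, adj_norm), adj)
    loss.backward()
    opt.step()

# Thresholding the learned affiliations yields (possibly overlapping) memberships:
memberships = model(x, adj_norm) > 0.5
```

Under this kind of model, a geographic unit whose affiliation row has several entries above the threshold belongs to several communities at once, which is how overlapping structure would be read off.
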
IM2City
Meiliu Wu, Qunying Huang
DOI: 10.1145/3557918.3565868 (https://doi.org/10.1145/3557918.3565868)
Published: 2022-11-01
Abstract: This study investigated multi-modal learning as a stand-alone solution to image geo-localization problems. Based on successful trials of the contrastive language-image pre-training (CLIP) model, we developed GEo-localization Multi-modal (GEM) models, which not only learn visual features from input images but also integrate the labels with corresponding geo-location context to generate textual features, which in turn are fused with the visual features for image geo-localization. We demonstrated that simply utilizing the image itself and appropriate contextualized prompts (i.e., mechanisms to integrate labels with geo-location context as textual features) is effective for global image geo-localization, which traditionally requires large amounts of geo-tagged images for image matching. Moreover, due to the integration of natural language, our GEM models are able to learn the spatial proximity of geo-contextualized labels (i.e., their spatial closeness), which is often neglected by classification-based geo-localization methods. In particular, the proposed Zero-shot GEM model (i.e., geo-contextualized prompt tuning on CLIP) outperforms the state-of-the-art model, Individual Scene Networks (ISN), obtaining 4.1% and 49.5% accuracy improvements on the two benchmark datasets, IM2GPS3k and Place Plus 2.0 (i.e., 22k street view images across 56 cities worldwide), respectively. In addition, our proposed Linear-probing GEM model (i.e., CLIP's image encoder linearly trained on street view images) outperforms ISN even more significantly, obtaining 16.8% and 71.0% accuracy improvements, respectively. By exploring optimal geographic scales (e.g., city-level vs. country-level), training datasets (street view images vs. random online images), and pre-trained models (e.g., ResNet vs. CLIP for linear probing), this research sheds light on integrating textual features with visual features for image geo-localization and beyond.
Citations: 3
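
As a rough illustration of the zero-shot, prompt-based geo-localization idea described above (not the paper's GEM implementation), the sketch below scores a street-view image against geo-contextualized text prompts with an off-the-shelf CLIP model through the Hugging Face transformers API. The checkpoint name, prompt template, city list, and input file name are illustrative assumptions.

```python
# Illustrative sketch only: zero-shot city prediction with CLIP and
# geo-contextualized prompts (city name paired with its country).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")       # assumed checkpoint
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

cities = ["Minneapolis, United States", "London, United Kingdom", "Tokyo, Japan"]
prompts = [f"a street view photo taken in {city}" for city in cities]   # assumed template

image = Image.open("query_street_view.jpg")                             # hypothetical input
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
    # logits_per_image gives the similarity of the image to each text prompt.
    probs = outputs.logits_per_image.softmax(dim=-1)

print(cities[probs.argmax(dim=-1).item()])
```

A linear-probing variant along the lines of the abstract would instead freeze the image encoder, extract embeddings with model.get_image_features(...), and fit a linear classifier over city labels on a labeled street-view training set.
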
Proceedings of the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery
DOI: 10.1145/3557918 (https://doi.org/10.1145/3557918)
Citations: 0