{"title":"Sensing overlapping geospatial communities from human movements using graph affiliation generation models","authors":"Peng Luo, Di Zhu","doi":"10.1145/3557918.3565862","DOIUrl":"https://doi.org/10.1145/3557918.3565862","url":null,"abstract":"Geographical units densely connected by human movements can be treated as a geospatial community. Detecting geospatial communities in a mobility network reveals key characteristics of human movements and urban structures. Recent studies have found communities can be overlapping in that one location may belong to multiple communities, posing great challenges to classic disjoint community detection methods that only identify single-affiliation relationships. In this work, we propose a Geospatial Overlapping Community Detection (GOCD) framework based on graph generation models and graph-based deep learning. GOCD aims to detect geographically overlapped communities regarding the multiplex connections underlying human movements, including weak and long-range ties. The detection process is formalized as deriving the optimized probability distribution of geographic units' community affiliations in order to generate the spatial network, i.e., the most reasonable community affiliation matrix given the observed network structure. Further, a graph convolutional network (GCN) is introduced to approach the affiliation probabilities via a deep learning strategy. The GOCD framework outperformed existing baselines on non-spatial benchmark datasets in terms of accuracy and speed. A case study of mobile positioning data in the Twin Cities Metropolitan Area (TCMA), Minnesota, was presented to validate our model on real-world human mobility networks. Our empirical results unveiled the overlapping spatial structures of communities, the overlapping intensity for each CBG, and the spatial heterogeneous structure of community affiliations in the Twin Cities.","PeriodicalId":428859,"journal":{"name":"Proceedings of the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130121468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IM2City","authors":"Meiliu Wu, Qunying Huang","doi":"10.1145/3557918.3565868","DOIUrl":"https://doi.org/10.1145/3557918.3565868","url":null,"abstract":"This study investigated multi-modal learning as a stand-alone solution to image geo-localization problems. Based on the successful trials on the contrastive language-image pre-training (CLIP) model, we developed GEo-localization Multi-modal (GEM) models, which not only learn the visual features from input images, but also integrate the labels with corresponding geo-location context to generate textual features, which in turn are fused with the visual features for image geo-localization. We demonstrated that simply utilizing the image itself and appropriate contextualized prompts (i.e., mechanisms to integrate labels with geo-location context as textural features) is effective for global image geo-localization, which traditionally requires large amounts of geo-tagged images for image matching. Moreover, due to the integration of natural language, our GEM models are able to learn spatial proximity of geo-contextualized labels (i.e., their spatial closeness), which is often neglected by classification-based geo-localization methods. In particular, the proposed Zero-shot GEM model (i.e., geo-contextualized prompt tuning on CLIP) outperforms the state-of-the-art model - Individual Scene Networks (ISN), obtaining 4.1% and 49.5% accuracy improvements on the two benchmark datasets, IM2GPS3k and Place Plus 2.0 (i.e., 22k street view images across 56 cities worldwide), respectively. In addition, our proposed Linear-probing GEM model (i.e., CLIP's image encoder linearly trained on street view images) outperforms ISN even more significantly, obtaining 16.8% and 71.0% accuracy improvements, respectively. By exploring optimal geographic scales (e.g., city-level vs. country-level), training datasets (street view images vs. random online images), and pre-trained models (e.g., ResNet vs. CLIP for linearly probing), this research sheds light on integrating textural features with visual features for image geo-localization and beyond.","PeriodicalId":428859,"journal":{"name":"Proceedings of the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133891574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery","authors":"","doi":"10.1145/3557918","DOIUrl":"https://doi.org/10.1145/3557918","url":null,"abstract":"","PeriodicalId":428859,"journal":{"name":"Proceedings of the 5th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128977534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}