A Deep-Learning-Based Multimodal Data Fusion Framework for Urban Region Function Recognition

Mingyang Yu, Haiqing Xu, Fangliang Zhou, Shuai Xu, Hongling Yin
ISPRS Int. J. Geo Inf. · Journal Article · Published 2023-11-21 · DOI: 10.3390/ijgi12120468

Abstract

Accurate and efficient classification maps of urban functional zones (UFZs) are crucial to urban planning, management, and decision making. Because UFZs have complex socioeconomic properties, it is increasingly challenging to identify them using remote-sensing images (RSIs) alone. Point-of-interest (POI) data and remote-sensing image data both play important roles in UFZ extraction. However, many existing methods use only a single type of data or simply concatenate the two, failing to exploit their complementary strengths. We therefore designed a deep-learning framework that integrates both types of data to identify urban functional zones. In the first part, the complementary feature-learning and fusion module, we use convolutional neural networks (CNNs) to extract visual and social features. Specifically, we extract visual features from RSI data, while POI data are converted into a distance heatmap tensor that is fed into a CNN with gated attention mechanisms to extract social features. A feature fusion module (FFM) with adaptive weights then fuses the two types of features. The second part is the spatial-relationship-modeling module. We designed a new spatial-relationship-learning network based on a vision transformer with long- and short-distance attention, which simultaneously learns the global and local spatial relationships of the urban functional zones. Finally, a feature aggregation module (FGM) efficiently combines the two spatial relationships. The experimental results show that the proposed model can fully extract visual features, social features, and spatial-relationship features from RSIs and POIs for more accurate UFZ recognition.
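Two steps named in the abstract can be sketched concretely: converting POI locations into a distance heatmap tensor, and fusing visual and social feature vectors with adaptive weights. The minimal NumPy sketch below is an illustration only, assuming a nearest-POI distance grid and a sigmoid-gated weighting; the paper's actual FFM architecture is not specified in the abstract, so all function names, shapes, and parameters here are hypothetical stand-ins for learned components.

```python
import numpy as np

def poi_distance_heatmap(pois, grid=32, extent=1.0):
    """Convert POI coordinates of one category into a heatmap where each
    cell stores the distance to its nearest POI (assumed layout)."""
    ys, xs = np.meshgrid(np.linspace(0, extent, grid),
                         np.linspace(0, extent, grid), indexing="ij")
    cells = np.stack([xs, ys], axis=-1).reshape(-1, 2)          # (grid*grid, 2)
    d = np.linalg.norm(cells[:, None, :] - pois[None, :, :], axis=-1)
    return d.min(axis=1).reshape(grid, grid)                    # nearest-POI distance

def adaptive_fuse(visual, social, w_v, w_s):
    """Adaptive-weight fusion: a per-dimension sigmoid gate decides how much
    of each modality to keep (w_v, w_s stand in for learned parameters)."""
    gate = 1.0 / (1.0 + np.exp(-(visual @ w_v + social @ w_s)))  # sigmoid gate
    return gate * visual + (1.0 - gate) * social

rng = np.random.default_rng(0)
heat = poi_distance_heatmap(rng.random((50, 2)))                # one POI category
v, s = rng.standard_normal(128), rng.standard_normal(128)
fused = adaptive_fuse(v, s,
                      rng.standard_normal((128, 128)) * 0.01,
                      rng.standard_normal((128, 128)) * 0.01)
print(heat.shape, fused.shape)                                  # (32, 32) (128,)
```

Stacking one such heatmap per POI category would yield the multi-channel tensor the abstract describes as CNN input; in the actual model the gate would be produced by trained layers rather than random matrices.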