Global urban high-resolution scene classification via uncertainty-aware domain generalization

IF 12.2 · Earth Science, Tier 1 · Q1 GEOGRAPHY, PHYSICAL
Jingjun Yi, Yanfei Zhong, Yu Su, Ruiyi Yang, Yinhe Liu, Junjue Wang
DOI: 10.1016/j.isprsjprs.2025.08.027
Journal: ISPRS Journal of Photogrammetry and Remote Sensing, Volume 230, Pages 92-108
Published: 2025-09-17
Citations: 0

Abstract

Global urban scene classification is a crucial technology for global land use mapping, holding significant importance in driving urban intelligence forward. When applying datasets constructed from urban scenes at a global scale, two serious problems arise. Due to cultural, economic, and other factors, style differences exist in scenes across different cities, posing challenges for model generalization. Additionally, urban scene samples often follow a long-tailed distribution, complicating the identification of tail categories with small sample volumes and impairing performance under domain generalization settings. To tackle these problems, the Uncertainty-aware Domain Generalization urban scene classification (UADG) framework is constructed. To mitigate city-related style differences among global cities, a city-related whitening module is proposed: it uses whitening operations to separate city-unrelated content features and adaptively preserves city-related information hidden in style features, rather than directly removing style information, thus yielding more robust representations. To tackle the significant accuracy decline in tail classes during domain generalization, estimated uncertainty is used to guide a mixture of experts, with hard samples assigned to appropriate experts to balance model bias. To evaluate the proposed UADG framework under a practical scenario, the Domain Generalized Urban Scene (DGUS) dataset is curated for validation, with a training set comprising 42 classes of samples from 34 provincial capitals in China and test samples selected from representative cities across six continents. Extensive experiments demonstrate that our method achieves state-of-the-art performance, notably outperforming the baseline GAMMA by 9.79% and 7.42% in average OA and AA, respectively, on the unseen domains of DGUS. UADG thus greatly enhances the automation of global urban land use mapping.
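The abstract describes two generic mechanisms: whitening that separates style statistics from content features (while retaining, rather than discarding, the style part), and uncertainty-guided routing of hard samples to a dedicated expert. The sketch below illustrates both building blocks in plain NumPy. It is a minimal illustration under assumptions: the function names, the use of normalized softmax entropy as the uncertainty score, and the routing threshold are all hypothetical, not the paper's actual UADG operators.

```python
import numpy as np

def instance_whitening(feats, eps=1e-5):
    """Whiten a (C, N) feature matrix to zero mean and identity covariance.

    Returns the whitened (content) features together with the removed
    style statistics (per-channel mean and channel covariance), which a
    city-aware module could selectively re-inject instead of discarding.
    """
    mu = feats.mean(axis=1, keepdims=True)            # per-channel mean (style)
    centered = feats - mu
    cov = centered @ centered.T / feats.shape[1]      # channel covariance (style)
    # Inverse square root of the (regularized) covariance via eigendecomposition.
    vals, vecs = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    content = inv_sqrt @ centered                     # whitened content features
    return content, mu, cov

def predictive_entropy(logits):
    """Normalized softmax entropy in [0, 1]; higher means more uncertain."""
    z = logits - logits.max(axis=-1, keepdims=True)   # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return h / np.log(logits.shape[-1])

def route_to_expert(logits, threshold=0.5):
    """Send high-uncertainty (hard, often tail-class) samples to a
    specialist expert; confident samples stay with the generalist."""
    return np.where(predictive_entropy(logits) > threshold,
                    "tail_expert", "generalist")
```

A confident prediction (one dominant logit) yields near-zero normalized entropy and stays with the generalist, while a near-uniform prediction scores close to 1 and is routed to the specialist; the whitened features have approximately identity covariance, with the removed mean and covariance kept available for style-aware reuse.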
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing (Engineering Technology – Imaging Science and Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Annual articles: 273
Review time: 40 days
Journal description: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in the various disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advances in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological-development, or application/practical aspects. Additionally, the journal welcomes papers based on presentations from ISPRS meetings, as long as they constitute significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.