SeCoV2: Semantic Connectivity-driven Pseudo-Labeling for Robust Cross-Domain Semantic Segmentation.

Authors: Dong Zhao, Qi Zang, Nan Pu, Shuang Wang, Nicu Sebe, Zhun Zhong
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor 18.6, CAS Tier 1 in Computer Science, JCR Q1 in Computer Science, Artificial Intelligence)
DOI: 10.1109/tpami.2025.3596943 | Published: 2025-08-08 | Citations: 0

Abstract

Pseudo-labeling is a dominant strategy for cross-domain semantic segmentation (CDSS), yet its effectiveness is limited by fragmented and noisy pixel-level predictions under severe domain shifts. To address this, we propose a semantic connectivity-driven pseudo-labeling framework, SeCo, which constructs and refines pseudo-labels at the connectivity level by aggregating high-confidence pixels into coherent semantic regions. The framework includes two key components: Pixel Semantic Aggregation (PSA), which leverages a dual prompting strategy to preserve category-specific granularity, and Semantic Connectivity Correction with Loss Distribution (SCC-LD), which filters noisy regions based on early-loss statistics. Building upon this foundation, we further present SeCoV2, which introduces SCC-Unc, a novel uncertainty-aware correction module that constructs a connectivity graph and enforces relational consistency for robust refinement in ambiguous regions. SeCoV2 also broadens the applicability of SeCo by extending evaluation to more challenging scenarios, including open-set and multimodal adaptation and semi-supervised domain generalization, and by validating compatibility with different interactive foundation segmentation models such as SAM [1], SEEM [2], and Fast-SAM [3]. Extensive experiments across six CDSS tasks demonstrate that SeCoV2 achieves consistent improvements over previous methods, with an average performance gain of up to +4.6%, establishing new state-of-the-art results. These findings highlight the effectiveness and generalization ability of SeCoV2 for robust adaptation in diverse real-world environments.
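
To make the connectivity-level construction concrete, below is a minimal sketch of the general idea, not the authors' implementation: high-confidence pixels are grouped into per-class connected components, and fragmented or low-confidence components are discarded. The `probs` input, the threshold values, and the area/mean-confidence filter are illustrative assumptions; SeCo and SeCoV2 instead refine these regions with loss-distribution statistics (SCC-LD) and uncertainty-aware graph consistency (SCC-Unc).

```python
# Sketch of connectivity-level pseudo-labeling (illustrative, not the paper's code).
# Assumes `probs` is a (C, H, W) softmax map from a source-trained segmenter.
import numpy as np
from scipy import ndimage


def connectivity_pseudo_labels(probs, conf_thresh=0.9, min_area=64, ignore_index=255):
    """Aggregate high-confidence pixels into connected semantic regions and
    drop small or ambiguous components, returning a pseudo-label map."""
    num_classes, h, w = probs.shape
    conf = probs.max(axis=0)        # per-pixel confidence
    pred = probs.argmax(axis=0)     # per-pixel class prediction
    pseudo = np.full((h, w), ignore_index, dtype=np.int64)

    for c in range(num_classes):
        # High-confidence pixels predicted as class c.
        mask = (pred == c) & (conf >= conf_thresh)
        if not mask.any():
            continue
        # Group them into 8-connected components (the "connectivity" level).
        components, n = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
        for k in range(1, n + 1):
            region = components == k
            # Simple area/confidence filter stands in for the paper's
            # loss-distribution (SCC-LD) and uncertainty (SCC-Unc) correction.
            if region.sum() < min_area or conf[region].mean() < conf_thresh:
                continue
            pseudo[region] = c
    return pseudo
```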
Source journal metrics: CiteScore 28.40 | Self-citation rate 3.00% | Articles per year 885 | Average review time 8.5 months
Journal description: The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition, and relevant specialized hardware and/or software architectures are also covered.