Bridging domain spaces for unsupervised domain adaptation

Jaemin Na, Heechul Jung, Hyung Jin Chang, Wonjun Hwang

Pattern Recognition, Volume 164, Article 111537 (published 2025-03-07). DOI: 10.1016/j.patcog.2025.111537
https://www.sciencedirect.com/science/article/pii/S0031320325001979
Abstract
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge obtained from a labeled source domain to an unlabeled target domain, facing challenges due to domain shift—significant discrepancies in data distribution that impair model performance when applied to unseen domains. While recent approaches have achieved remarkable progress in mitigating these domain shifts, the focus remains on direct adaptation strategies from source to target domains. However, when the gap between the source and target domains is too substantial, directly aligning their distributions becomes increasingly difficult. Pseudo-labeling, a common strategy in direct adaptation, can further exacerbate this issue when the domain shift is severe. In such cases, incorrect pseudo-labels are likely to propagate through the adaptation process, leading to degraded performance and unstable training. Effective adaptation thus requires methods that can address these challenges by improving the reliability of pseudo-labels or reducing dependency on them. To address this challenge, we propose a novel approach that effectively alleviates domain shift by leveraging intermediate domains as bridges between the source and target domains. Specifically, we introduce a fixed ratio-based mixup to generate distinct intermediate domains between the source and target domains. By training on these augmented domains, we construct source-dominant and target-dominant models that possess distinct strengths and weaknesses, enabling us to implement effective complementary learning strategies. Furthermore, we enhance our fixed ratio-based mixup with uncertainty-aware learning, which addresses not only the image-level space but also the feature space, aiming to reduce the uncertainty at the most critical points within these spaces. Finally, we integrate confidence-based learning strategies, including bidirectional matching with high-confidence predictions and self-penalization with low-confidence predictions. Our extensive experiments on seven public benchmarks, including both single-source and multi-source scenarios, demonstrate the effectiveness of our method in UDA tasks.
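As a rough illustration of the fixed ratio-based mixup described in the abstract, the PyTorch sketch below mixes a labeled source batch with a pseudo-labeled target batch at a fixed ratio to produce source-dominant and target-dominant intermediate domains. The function name, the ratio value 0.7, the number of classes, and the soft-label mixing are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam=0.7, num_classes=31):
    """Mix a source and a target batch with a fixed ratio `lam`.

    With lam > 0.5 the mixed images are source-dominant; calling the same
    function with 1 - lam gives the target-dominant counterpart. Labels are
    mixed as soft one-hot vectors, using pseudo-labels on the target side
    (an illustrative choice; the paper's exact formulation may differ).
    """
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_src_1h = F.one_hot(y_src, num_classes).float()
    y_tgt_1h = F.one_hot(y_tgt_pseudo, num_classes).float()
    y_mix = lam * y_src_1h + (1.0 - lam) * y_tgt_1h
    return x_mix, y_mix

# Toy usage: build the two intermediate ("bridge") domains from one batch.
x_src, y_src = torch.randn(8, 3, 224, 224), torch.randint(0, 31, (8,))
x_tgt = torch.randn(8, 3, 224, 224)
y_tgt_pseudo = torch.randint(0, 31, (8,))  # pseudo-labels from the current model
x_sd, y_sd = fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam=0.7)  # source-dominant
x_td, y_td = fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam=0.3)  # target-dominant
```

Training one network on the source-dominant batches and another on the target-dominant batches would yield the two complementary models the abstract refers to; the confidence-based bidirectional matching and self-penalization losses would then operate on their predictions.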
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.