Generative models for noise-robust training in unsupervised domain adaptation
Zhongying Deng, Da Li, Junjun He, Xiaojiang Peng, Yi-Zhe Song, Tao Xiang
Pattern Recognition, Volume 172, Article 112450. Published 2025-09-27. DOI: 10.1016/j.patcog.2025.112450
Citations: 0
Abstract
Recent unsupervised domain adaptation (UDA) methods show the effectiveness of pseudo-labels for the unlabeled target domain. However, pseudo-labels inevitably contain noise, which can degrade adaptation performance. This paper thus proposes Generative models for Noise-Robust Training (GeNRT), a method designed to mitigate label noise while reducing domain shift. The key idea is that the class-wise distributions of the target domain, modeled by generative models, provide more reliable pseudo-labels than individual pseudo-labeled instances, because the distributions statistically represent class-wise information better than a single instance does. Based on this observation, GeNRT incorporates Distribution-based Class-wise Feature Augmentation (D-CFA), which enhances feature representations by sampling features from target class distributions modeled by generative models. These augmented features serve a dual purpose: (1) providing class-level knowledge from the generative models to train a noise-robust discriminative classifier, and (2) acting as intermediate features that bridge the domain gap at the class level. Furthermore, GeNRT leverages Generative and Discriminative Consistency (GDC), enforcing consistency regularization between a generative classifier (formed by all class-wise generative models) and the learned discriminative classifier. By aggregating knowledge across target class distributions, GeNRT improves pseudo-label reliability and enhances robustness against label noise. Extensive experiments on Office-Home, VisDA-2017, PACS, and Digit-Five show that GeNRT achieves performance comparable to state-of-the-art methods under both single-source and multi-source UDA settings.
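The abstract's two components, D-CFA (sampling augmented features from class-wise target distributions) and GDC (consistency between a generative classifier built from those distributions and a discriminative classifier), can be illustrated with a minimal sketch. This assumes diagonal Gaussians as the class-wise generative models and uses hypothetical helper names; the abstract does not specify the paper's actual generative model or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_class_gaussians(feats, pseudo_labels, num_classes):
    """Fit one diagonal Gaussian per class from pseudo-labeled target features
    (an assumed stand-in for the paper's class-wise generative models)."""
    stats = []
    for c in range(num_classes):
        fc = feats[pseudo_labels == c]
        stats.append((fc.mean(axis=0), fc.var(axis=0) + 1e-6))
    return stats

def dcfa_sample(stats, labels):
    """D-CFA-style augmentation: draw features from the class distributions."""
    return np.stack([
        rng.normal(stats[c][0], np.sqrt(stats[c][1])) for c in labels
    ])

def generative_posterior(stats, x):
    """Generative classifier: softmax over per-class Gaussian log-densities."""
    logp = np.array([
        -0.5 * np.sum((x - mu) ** 2 / var + np.log(var)) for mu, var in stats
    ])
    e = np.exp(logp - logp.max())  # stabilized softmax
    return e / e.sum()

# Toy demo: two classes with well-separated 5-D features.
feats = np.concatenate([rng.normal(0.0, 1.0, (50, 5)),
                        rng.normal(3.0, 1.0, (50, 5))])
pseudo_labels = np.array([0] * 50 + [1] * 50)

stats = fit_class_gaussians(feats, pseudo_labels, num_classes=2)
aug = dcfa_sample(stats, np.array([0, 1]))      # augmented features, one per class
post = generative_posterior(stats, feats[0])    # generative prediction for a sample
# GDC would then penalize divergence (e.g. a KL term) between `post` and the
# discriminative classifier's softmax output on the same feature.
```

The augmented features `aug` can be fed to the discriminative classifier as extra class-labeled training data, which is how distribution-level knowledge reaches the classifier in this sketch.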
Journal Introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.