Zhihao Zhou , Rui Li , Wenjie Ai , Xueying Li , Zhu Teng , Baopeng Zhang , Junwei Du
{"title":"带噪声标签学习的亲和力感知不确定性量化","authors":"Zhihao Zhou , Rui Li , Wenjie Ai , Xueying Li , Zhu Teng , Baopeng Zhang , Junwei Du","doi":"10.1016/j.patcog.2025.112495","DOIUrl":null,"url":null,"abstract":"<div><div>Training deep neural networks (DNNs) with noisy labels is a challenging task that significantly degenerates the model’s performance. Most existing methods mitigate this problem by identifying and eliminating noisy samples or correcting their labels according to statistical properties like confidence values. However, these methods often overlook the impact of inherent noise, such as sample quality, which can mislead DNNs to focus on incorrect regions, adulterate the softmax classifier, and generate low-quality pseudo-labels. In this paper, we propose a novel Affinity-aware Uncertainty Quantification (AUQ) framework to explore the perception ambiguity and rectify the salient bias by quantifying the uncertainty. Concretely, we construct the dynamic prototypes to represent intra-class semantic spaces and estimate the uncertainty based on sample-prototype pairs, where the observed affinities between sample-prototype pairs are converted to probabilistic representations as the estimated uncertainty. Samples with higher uncertainty are likely to be hard samples and we design an uncertainty-aware loss to emphasize the learning from those samples with high uncertainty, which helps DNNs to gradually concentrate on the critical regions. Besides, we further utilize sample-prototype affinities to adaptively refine pseudo-labels, enhancing the quality of supervisory signals for noisy samples. Extensive experiments conducted on the CIFAR-10, CIFAR-100 and Clothing1M datasets demonstrate the efficacy and effectiveness of AUQ. Notably, we achieve an average performance gain of 0.4 % on CIFAR-10 and a substantial average improvement of 2.3 % over the second-best method on the more challenging CIFAR-100 dataset. 
Moreover, there is a 0.6 % improvement over the sub-optimal method on Clothing1M. These results validate AUQ’s capability in enhancing DNN robustness against noisy labels.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"172 ","pages":"Article 112495"},"PeriodicalIF":7.6000,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Affinity-aware uncertainty quantification for learning with noisy labels\",\"authors\":\"Zhihao Zhou , Rui Li , Wenjie Ai , Xueying Li , Zhu Teng , Baopeng Zhang , Junwei Du\",\"doi\":\"10.1016/j.patcog.2025.112495\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Training deep neural networks (DNNs) with noisy labels is a challenging task that significantly degenerates the model’s performance. Most existing methods mitigate this problem by identifying and eliminating noisy samples or correcting their labels according to statistical properties like confidence values. However, these methods often overlook the impact of inherent noise, such as sample quality, which can mislead DNNs to focus on incorrect regions, adulterate the softmax classifier, and generate low-quality pseudo-labels. In this paper, we propose a novel Affinity-aware Uncertainty Quantification (AUQ) framework to explore the perception ambiguity and rectify the salient bias by quantifying the uncertainty. Concretely, we construct the dynamic prototypes to represent intra-class semantic spaces and estimate the uncertainty based on sample-prototype pairs, where the observed affinities between sample-prototype pairs are converted to probabilistic representations as the estimated uncertainty. 
Samples with higher uncertainty are likely to be hard samples and we design an uncertainty-aware loss to emphasize the learning from those samples with high uncertainty, which helps DNNs to gradually concentrate on the critical regions. Besides, we further utilize sample-prototype affinities to adaptively refine pseudo-labels, enhancing the quality of supervisory signals for noisy samples. Extensive experiments conducted on the CIFAR-10, CIFAR-100 and Clothing1M datasets demonstrate the efficacy and effectiveness of AUQ. Notably, we achieve an average performance gain of 0.4 % on CIFAR-10 and a substantial average improvement of 2.3 % over the second-best method on the more challenging CIFAR-100 dataset. Moreover, there is a 0.6 % improvement over the sub-optimal method on Clothing1M. These results validate AUQ’s capability in enhancing DNN robustness against noisy labels.</div></div>\",\"PeriodicalId\":49713,\"journal\":{\"name\":\"Pattern Recognition\",\"volume\":\"172 \",\"pages\":\"Article 112495\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2025-09-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Recognition\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0031320325011586\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern 
Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325011586","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Affinity-aware uncertainty quantification for learning with noisy labels
Training deep neural networks (DNNs) with noisy labels is a challenging task that significantly degrades model performance. Most existing methods mitigate this problem by identifying and eliminating noisy samples, or by correcting their labels according to statistical properties such as confidence values. However, these methods often overlook the impact of inherent noise, such as low sample quality, which can mislead DNNs into focusing on incorrect regions, corrupt the softmax classifier, and generate low-quality pseudo-labels. In this paper, we propose a novel Affinity-aware Uncertainty Quantification (AUQ) framework that explores perception ambiguity and rectifies salient bias by quantifying uncertainty. Concretely, we construct dynamic prototypes to represent intra-class semantic spaces and estimate uncertainty from sample-prototype pairs, converting the observed affinities between samples and prototypes into probabilistic representations that serve as the estimated uncertainty. Samples with higher uncertainty are likely to be hard samples, so we design an uncertainty-aware loss that emphasizes learning from them, helping DNNs gradually concentrate on the critical regions. In addition, we further utilize sample-prototype affinities to adaptively refine pseudo-labels, enhancing the quality of the supervisory signals for noisy samples. Extensive experiments on the CIFAR-10, CIFAR-100, and Clothing1M datasets demonstrate the effectiveness of AUQ. Notably, AUQ achieves an average performance gain of 0.4% on CIFAR-10, a substantial average improvement of 2.3% over the second-best method on the more challenging CIFAR-100 dataset, and a 0.6% improvement over the second-best method on Clothing1M. These results validate AUQ's capability to enhance DNN robustness against noisy labels.
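The abstract does not give the paper's formulas, but the core idea it describes (converting sample-prototype affinities into a probabilistic uncertainty, then weighting the loss by that uncertainty) can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the function names, the use of temperature-scaled cosine affinities with normalized entropy as the uncertainty, and the `1 + gamma * uncertainty` weighting are plausible readings of the abstract, not the authors' actual method (the paper's dynamic prototypes are updated during training; here they are fixed inputs).

```python
import numpy as np

def affinity_uncertainty(embeddings, prototypes, tau=0.1):
    """Turn sample-prototype affinities into a per-sample uncertainty.

    embeddings: (N, D) sample features; prototypes: (C, D), one per class.
    Returns (probs, uncertainty): probs is the softmax over scaled cosine
    affinities; uncertainty is its entropy normalized to [0, 1].
    Hypothetical stand-in for the paper's probabilistic representation.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = e @ p.T / tau                       # (N, C) scaled cosine affinities
    sims -= sims.max(axis=1, keepdims=True)    # numerically stable softmax
    probs = np.exp(sims)
    probs /= probs.sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    uncertainty = entropy / np.log(probs.shape[1])  # normalize by log(C)
    return probs, uncertainty

def uncertainty_aware_loss(per_sample_losses, uncertainty, gamma=1.0):
    """Up-weight hard (high-uncertainty) samples; the linear weighting
    scheme is illustrative, not taken from the paper."""
    weights = 1.0 + gamma * uncertainty
    return float((weights * per_sample_losses).mean())

# A sample aligned with its class prototype gets near-zero uncertainty;
# one equidistant from all prototypes gets uncertainty close to 1.
prototypes = np.eye(3)
embeddings = np.array([[1.0, 0.0, 0.0],     # matches prototype 0
                       [1.0, 1.0, 1.0]])    # ambiguous between all classes
probs, u = affinity_uncertainty(embeddings, prototypes)
loss = uncertainty_aware_loss(np.array([1.0, 1.0]), u)
```

Under this reading, the ambiguous sample contributes roughly twice the weight of the confident one, which mirrors the abstract's claim that learning is emphasized on high-uncertainty (hard) samples.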
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.