{"title":"Un-CNL:基于不确定性的连续噪声学习框架","authors":"Guangrui Guo , Jinyong Cheng","doi":"10.1016/j.neucom.2025.130985","DOIUrl":null,"url":null,"abstract":"<div><div>The goal of continual learning is to maintain model performance while adapting to new tasks and evolving data environments. This helps address catastrophic forgetting, a common issue in deep learning. However, challenges like human annotation errors and label biases introduce noisy labels into datasets, further intensifying catastrophic forgetting in neural networks. In response to these challenges, the concept of continual noisy learning (CNL) has emerged. While existing methods often rely on sample selection and replay strategies, they tend to focus solely on sample confidence, neglecting representativeness. To improve the reliability and representativeness of replayed samples, we propose a novel method called Un-CNL. This approach uses uncertainty purification techniques based on perturbed samples to separate data streams and select reliable samples for replay. Additionally, we apply CutMix data augmentation to enhance the representativeness of these samples. Subsequently, semi-supervised learning is employed for fine-tuning, combined with contrastive learning to handle the classification challenges posed by noisy data streams. We validated the effectiveness of Un-CNL through experiments on CIFAR-10 and CIFAR-100 datasets, demonstrating its superior performance compared to existing methods.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130985"},"PeriodicalIF":6.5000,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Un-CNL: An uncertainty-based continual noisy learning framework\",\"authors\":\"Guangrui Guo , Jinyong Cheng\",\"doi\":\"10.1016/j.neucom.2025.130985\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The goal of continual learning is to maintain model performance while adapting to new tasks and evolving data environments. This helps address catastrophic forgetting, a common issue in deep learning. However, challenges like human annotation errors and label biases introduce noisy labels into datasets, further intensifying catastrophic forgetting in neural networks. In response to these challenges, the concept of continual noisy learning (CNL) has emerged. While existing methods often rely on sample selection and replay strategies, they tend to focus solely on sample confidence, neglecting representativeness. To improve the reliability and representativeness of replayed samples, we propose a novel method called Un-CNL. This approach uses uncertainty purification techniques based on perturbed samples to separate data streams and select reliable samples for replay. Additionally, we apply CutMix data augmentation to enhance the representativeness of these samples. Subsequently, semi-supervised learning is employed for fine-tuning, combined with contrastive learning to handle the classification challenges posed by noisy data streams. 
We validated the effectiveness of Un-CNL through experiments on CIFAR-10 and CIFAR-100 datasets, demonstrating its superior performance compared to existing methods.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"651 \",\"pages\":\"Article 130985\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-07-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225016571\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225016571","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Un-CNL: An uncertainty-based continual noisy learning framework
Abstract: The goal of continual learning is to maintain model performance while adapting to new tasks and evolving data environments, thereby addressing catastrophic forgetting, a common issue in deep learning. However, human annotation errors and label biases introduce noisy labels into datasets, which further intensifies catastrophic forgetting in neural networks. In response to these challenges, the concept of continual noisy learning (CNL) has emerged. Existing methods often rely on sample selection and replay strategies, but they tend to focus solely on sample confidence while neglecting representativeness. To improve both the reliability and the representativeness of replayed samples, we propose a novel method called Un-CNL. This approach uses uncertainty purification based on perturbed samples to separate the data stream and select reliable samples for replay. Additionally, we apply CutMix data augmentation to enhance the representativeness of these samples. Semi-supervised learning is then employed for fine-tuning, combined with contrastive learning to handle the classification challenges posed by noisy data streams. We validated the effectiveness of Un-CNL through experiments on the CIFAR-10 and CIFAR-100 datasets, demonstrating its superior performance compared to existing methods.
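The abstract names two concrete building blocks: scoring samples by how stable their predictions are under input perturbations, and enriching replayed samples with CutMix. The sketch below is a minimal, hypothetical illustration of both ideas in PyTorch; the function names, the Gaussian-noise perturbation, the entropy-based score, and all hyperparameters are assumptions made for illustration and are not taken from the paper's implementation.

# Minimal sketch (assumed PyTorch), not the authors' implementation:
# (1) uncertainty from prediction disagreement under input perturbations,
# (2) standard CutMix augmentation (Yun et al., 2019).
import torch
import torch.nn.functional as F

def perturbation_uncertainty(model, x, n_perturb=5, noise_std=0.05):
    # Average softmax outputs over a few Gaussian-perturbed copies of each input
    # and use the predictive entropy as an uncertainty score (lower = more reliable).
    model.eval()
    with torch.no_grad():
        probs = torch.stack([
            F.softmax(model(x + noise_std * torch.randn_like(x)), dim=1)
            for _ in range(n_perturb)
        ])                                     # (n_perturb, batch, n_classes)
        mean_p = probs.mean(dim=0)
        return -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)

def cutmix(x, y, alpha=1.0):
    # Paste a random rectangular patch from a shuffled copy of the batch and
    # return both label sets with a mixing weight proportional to the patch area.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    _, _, h, w = x.shape
    cut_h, cut_w = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x_mixed = x.clone()
    x_mixed[:, :, y1:y2, x1:x2] = x[idx, :, y1:y2, x1:x2]
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return x_mixed, y, y[idx], lam_adj

In a typical CutMix training step, the returned pair of label sets would be combined in the loss as lam_adj * loss(pred, y_a) + (1 - lam_adj) * loss(pred, y_b); how Un-CNL integrates these components with its replay buffer and semi-supervised fine-tuning is described in the paper itself.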
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.