Shao-Yuan Li , Yu-Xiang Zheng , Sheng-Jun Huang , Songcan Chen , Kangkan Wang
Title: Prototypes as Anchors: Tackling Unseen Noise for Online Continual Learning
Journal: Neural Networks, Volume 190, Article 107634 (Q1, Computer Science, Artificial Intelligence)
DOI: 10.1016/j.neunet.2025.107634
Publication date: 2025-06-19
URL: https://www.sciencedirect.com/science/article/pii/S0893608025005143
Citations: 0
Abstract
In the context of online class-incremental continual learning (CIL), adapting to label noise is paramount for model success in evolving domains. While some continual learning (CL) methods have begun to address noisy data streams, most assume that the noise is strictly closed-set, i.e., that noisy labels in the current task originate from classes within the same task. This assumption is unrealistic in real-world scenarios. In this paper, we first formulate and analyze the concepts of closed-set and open-set noise, showing that both types can introduce unseen classes for the classifier currently being trained. Then, to effectively handle noisy labels and unknown classes, we present a replay-based method, Prototypes as Anchors (PAA), which learns representative and discriminative prototypes for each class and applies a similarity-based denoising scheme in the representation space to identify unseen-class samples and eliminate their negative impact. Through a dual-classifier architecture, PAA performs consistency checks between the classifiers to ensure robustness. Extensive experimental results on diverse datasets demonstrate a significant improvement in performance and robustness over existing approaches, offering a promising avenue for continual learning in dynamic, real-world environments.
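The abstract does not give implementation details, but the core idea it describes (per-class prototypes in representation space, with a similarity test that flags samples whose embeddings sit far from their labeled class's prototype) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the cosine-similarity choice, and the threshold `tau` are not taken from the paper.

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Hypothetical prototype construction: the L2-normalized mean
    embedding of each class's samples (a common choice; the paper's
    exact prototype update rule may differ)."""
    protos = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(axis=0)
    norms = np.linalg.norm(protos, axis=1, keepdims=True)
    return protos / np.clip(norms, 1e-12, None)

def flag_unseen(feats, labels, protos, tau=0.6):
    """Similarity-based denoising sketch: cosine similarity between
    each sample and the prototype of its assigned label; similarity
    below the (assumed) threshold tau suggests a noisy label or an
    unseen-class sample."""
    f = feats / np.clip(np.linalg.norm(feats, axis=1, keepdims=True),
                        1e-12, None)
    sims = np.einsum("nd,nd->n", f, protos[labels])
    return sims < tau

# Toy usage: two well-separated classes in 2-D, plus one sample
# labeled class 0 whose embedding clearly belongs with class 1.
feats = np.array([[1.0, 0.0], [0.9, 0.1],   # class 0
                  [0.0, 1.0], [0.1, 0.9],   # class 1
                  [0.0, 1.0]])              # mislabeled as class 0
labels = np.array([0, 0, 1, 1, 0])
protos = class_prototypes(feats, labels, num_classes=2)
print(flag_unseen(feats, labels, protos))  # only the last sample is flagged
```

In an online CL setting, prototypes would be updated from the replay buffer as the stream advances, and flagged samples could be down-weighted or excluded; the dual-classifier consistency check the abstract mentions would add a second filter on top of this similarity test.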
About the Journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.