PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning

IF 18.6
Fei Zhu;Xu-Yao Zhang;Zhen Cheng;Cheng-Lin Liu
{"title":"pass++:非范例类增量学习的双偏差减少框架","authors":"Fei Zhu;Xu-Yao Zhang;Zhen Cheng;Cheng-Lin Liu","doi":"10.1109/TPAMI.2025.3568886","DOIUrl":null,"url":null,"abstract":"Class-incremental learning (CIL) aims to continually recognize new classes while preserving the discriminability of previously learned ones. Most existing CIL methods are exemplar-based, relying on the storage and replay of a subset of old data during training. Without access to such data, these methods typically suffer from catastrophic forgetting. In this paper, we identify two fundamental causes of forgetting in CIL: representation bias and classifier bias. To address these challenges, we propose a simple yet effective dual-bias reduction framework, which leverages self-supervised transformation (SST) in the input space and prototype augmentation (protoAug) in the feature space. On one hand, SST mitigates representation bias by encouraging the model to learn generic, diverse representations that generalize across tasks. On the other hand, protoAug tackles classifier bias by explicitly or implicitly augmenting the prototypes of old classes in the feature space, thereby imposing stronger constraints to preserve decision boundaries. We further enhance the framework with hardness-aware prototype augmentation and multi-view ensemble strategies, yielding significant performance gains. The proposed framework can be easily integrated with pre-trained models. Without storing any samples of old classes, our method performs comparably to state-of-the-art exemplar-based approaches that rely on extensive data storage. We hope to draw the attention of researchers back to non-exemplar CIL by rethinking the necessity of storing old samples.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 8","pages":"7123-7139"},"PeriodicalIF":18.6000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning\",\"authors\":\"Fei Zhu;Xu-Yao Zhang;Zhen Cheng;Cheng-Lin Liu\",\"doi\":\"10.1109/TPAMI.2025.3568886\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Class-incremental learning (CIL) aims to continually recognize new classes while preserving the discriminability of previously learned ones. Most existing CIL methods are exemplar-based, relying on the storage and replay of a subset of old data during training. Without access to such data, these methods typically suffer from catastrophic forgetting. In this paper, we identify two fundamental causes of forgetting in CIL: representation bias and classifier bias. To address these challenges, we propose a simple yet effective dual-bias reduction framework, which leverages self-supervised transformation (SST) in the input space and prototype augmentation (protoAug) in the feature space. On one hand, SST mitigates representation bias by encouraging the model to learn generic, diverse representations that generalize across tasks. On the other hand, protoAug tackles classifier bias by explicitly or implicitly augmenting the prototypes of old classes in the feature space, thereby imposing stronger constraints to preserve decision boundaries. We further enhance the framework with hardness-aware prototype augmentation and multi-view ensemble strategies, yielding significant performance gains. 
The proposed framework can be easily integrated with pre-trained models. Without storing any samples of old classes, our method performs comparably to state-of-the-art exemplar-based approaches that rely on extensive data storage. We hope to draw the attention of researchers back to non-exemplar CIL by rethinking the necessity of storing old samples.\",\"PeriodicalId\":94034,\"journal\":{\"name\":\"IEEE transactions on pattern analysis and machine intelligence\",\"volume\":\"47 8\",\"pages\":\"7123-7139\"},\"PeriodicalIF\":18.6000,\"publicationDate\":\"2025-03-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on pattern analysis and machine intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10999068/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10999068/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Class-incremental learning (CIL) aims to continually recognize new classes while preserving the discriminability of previously learned ones. Most existing CIL methods are exemplar-based, relying on the storage and replay of a subset of old data during training. Without access to such data, these methods typically suffer from catastrophic forgetting. In this paper, we identify two fundamental causes of forgetting in CIL: representation bias and classifier bias. To address these challenges, we propose a simple yet effective dual-bias reduction framework, which leverages self-supervised transformation (SST) in the input space and prototype augmentation (protoAug) in the feature space. On one hand, SST mitigates representation bias by encouraging the model to learn generic, diverse representations that generalize across tasks. On the other hand, protoAug tackles classifier bias by explicitly or implicitly augmenting the prototypes of old classes in the feature space, thereby imposing stronger constraints to preserve decision boundaries. We further enhance the framework with hardness-aware prototype augmentation and multi-view ensemble strategies, yielding significant performance gains. The proposed framework can be easily integrated with pre-trained models. Without storing any samples of old classes, our method performs comparably to state-of-the-art exemplar-based approaches that rely on extensive data storage. We hope to draw the attention of researchers back to non-exemplar CIL by rethinking the necessity of storing old samples.
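The abstract describes the two bias-reduction components only at a high level. The sketch below is an illustrative reconstruction, not the authors' released code: it assumes SST is realized as rotation-based label augmentation (each input rotated by 0°/90°/180°/270° and assigned a rotation-specific class label) and that protoAug is realized as Gaussian sampling around stored class-mean features. The function names `expand_with_rotations`, `sample_old_class_features`, and `incremental_step_loss`, as well as the `model.classifier` attribute, are hypothetical.

```python
# Illustrative sketch (not the authors' implementation) of the two components
# described in the abstract, assuming a standard PyTorch training loop.
import torch
import torch.nn.functional as F


def expand_with_rotations(images, labels, num_base_classes):
    """Self-supervised transformation (SST) in the input space.

    Each image is rotated by 0/90/180/270 degrees and the label space is
    expanded so that every (class, rotation) pair is its own class, which
    pushes the backbone toward more generic, transferable representations.
    """
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    images_aug = torch.cat(rotated, dim=0)
    labels_aug = torch.cat([labels + k * num_base_classes for k in range(4)], dim=0)
    return images_aug, labels_aug


def sample_old_class_features(prototypes, radius, samples_per_class):
    """Prototype augmentation (protoAug) in the feature space.

    Only the mean feature (prototype) of each old class is kept. During a new
    task, pseudo-features are drawn by adding Gaussian noise scaled by an
    estimated radius, so the classifier keeps seeing "old" features without
    storing any old images.
    """
    feats, labels = [], []
    for cls, proto in prototypes.items():  # prototypes: {class_id: 1-D feature tensor}
        noise = torch.randn(samples_per_class, proto.numel(), device=proto.device) * radius
        feats.append(proto.unsqueeze(0) + noise)
        labels.append(torch.full((samples_per_class,), cls, dtype=torch.long, device=proto.device))
    return torch.cat(feats), torch.cat(labels)


def incremental_step_loss(model, images, labels, num_base_classes, prototypes, radius):
    """One training step combining SST on new data with protoAug replay."""
    x_aug, y_aug = expand_with_rotations(images, labels, num_base_classes)
    loss = F.cross_entropy(model(x_aug), y_aug)  # classifier over the expanded label space

    if prototypes:  # only after the first task has produced prototypes
        old_feats, old_labels = sample_old_class_features(prototypes, radius, samples_per_class=64)
        old_logits = model.classifier(old_feats)  # assumes the model exposes its linear head
        loss = loss + F.cross_entropy(old_logits, old_labels)
    return loss
```

The hardness-aware prototype augmentation and multi-view ensemble mentioned in the abstract are refinements on top of this basic recipe and are omitted from the sketch for brevity.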