Class-Incremental Learning: A Survey

Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu
IEEE Transactions on Pattern Analysis and Machine Intelligence. Published 2024-07-16. DOI: 10.1109/TPAMI.2024.3429383. https://ieeexplore.ieee.org/document/10599804/

Abstract

Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results in many vision tasks in the closed world. However, novel classes emerge from time to time in our ever-changing world, requiring a learning system to acquire new knowledge continually. Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally and build a universal classifier over all seen classes. Correspondingly, when the model is trained directly on new class instances, a fatal problem occurs: the model tends to catastrophically forget the characteristics of former classes, and its performance degrades drastically. There have been numerous efforts to tackle catastrophic forgetting in the machine learning community. In this paper, we comprehensively survey recent advances in class-incremental learning and summarize these methods from several aspects. We also provide a rigorous and unified evaluation of 17 methods on benchmark image classification tasks to empirically characterize the different algorithms. Furthermore, we note that the current comparison protocol ignores the influence of the memory budget on model storage, which may result in unfair comparisons and biased results. Hence, we advocate fair comparison by aligning the memory budget during evaluation, together with several memory-agnostic performance measures.
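To make the evaluation notions in the abstract concrete, the sketch below computes the average incremental accuracy (the mean accuracy measured after each incremental stage) and aligns an exemplar-based and an exemplar-free method under one memory budget, counting model parameters and stored exemplar images in bytes. All numbers, model sizes, and function names here are hypothetical illustrations, not figures from the survey; the byte accounting assumes float32 parameters and uint8 CIFAR-sized images.

```python
# Sketch of CIL evaluation bookkeeping (illustrative numbers only).

def average_incremental_accuracy(stage_accs):
    """Mean of the accuracy measured after each incremental stage."""
    return sum(stage_accs) / len(stage_accs)

def memory_budget_bytes(n_params, n_exemplars, image_shape=(3, 32, 32)):
    """Express model storage and exemplar storage in one unit (bytes).
    float32 parameter = 4 bytes; uint8 image pixel = 1 byte."""
    c, h, w = image_shape
    return 4 * n_params + n_exemplars * c * h * w

# Accuracy after each of 5 incremental stages (hypothetical method):
accs = [0.92, 0.85, 0.80, 0.76, 0.73]
print(f"average incremental accuracy: {average_incremental_accuracy(accs):.3f}")
# -> 0.812

# Exemplar-based method: small backbone + 2000 stored CIFAR images.
budget_a = memory_budget_bytes(n_params=470_000, n_exemplars=2000)
# Exemplar-free method: larger backbone, no stored images.
budget_b = memory_budget_bytes(n_params=2_000_000, n_exemplars=0)
print(budget_a, budget_b)  # 8024000 vs 8000000: roughly aligned budgets
```

Under this accounting, comparing the two methods is fairer: the exemplar-free model is granted extra parameters so that its total footprint matches the exemplar-based one, which is the spirit of the budget-aligned comparison the survey advocates.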
Source code is available at https://github.com/zhoudw-zdw/CIL_Survey/.