Preserve Knowledge with Auxiliary Feature Extractor for Class Incremental Learning

Huihui Jie, Yuesheng Zhu
{"title":"Preserve Knowledge with Auxiliary Feature Extractor for Class Incremental Learning","authors":"Huihui Jie, Yuesheng Zhu","doi":"10.1145/3561613.3561615","DOIUrl":null,"url":null,"abstract":"Class incremental learning (CIL) aims to achieve the ability to learn knowledge from the data of novel classes that arrive incrementally. To this end, the exemplar-based method stores a small number of samples of old classes and has been proven to be effective yet it causes the severe data imbalance issue. An approach named SS-IL solves the issue effectively and achieves strong state-of-the-art on large-scale CIL benchmark datasets while behaving badly on small ones. In this paper, we observe that the poor performance of SS-IL on small datasets could stem from not fully stimulating the potentiality of the learned representation of old classes, especially the initial classes. We propose an auxiliary Weight Scaling Feature Extractor (aWSFE) to better maintain and exploit the essential semantics of old classes. This auxiliary extractor is used as a plug-in module with the main classification network based on SS-IL in parallel. We perform a special design for the two branches so that the feature vectors from the main and auxiliary extractor can be integrated easily without an additional aggregation process. After obtaining the updated representations, we finetuning the classifier based on a balanced subset of training data to further promote performance. We conduct extensive experiments on two small-scale CIL benchmark datasets: CIFAR-100 and ImageNet-Sub. Results show that the proposed method effectively alleviates the forgetting of old knowledge and significantly improves the performance of SS-IL on small datasets.","PeriodicalId":348024,"journal":{"name":"Proceedings of the 5th International Conference on Control and Computer Vision","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Control and Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3561613.3561615","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Class incremental learning (CIL) aims to learn from the data of novel classes that arrive incrementally. To this end, exemplar-based methods store a small number of samples of old classes and have proven effective, yet they introduce a severe data-imbalance issue. An approach named SS-IL addresses this imbalance effectively and achieves state-of-the-art results on large-scale CIL benchmark datasets, but performs poorly on small ones. In this paper, we observe that the poor performance of SS-IL on small datasets may stem from not fully exploiting the potential of the learned representations of old classes, especially the initial classes. We propose an auxiliary Weight Scaling Feature Extractor (aWSFE) to better preserve and exploit the essential semantics of old classes. The auxiliary extractor serves as a plug-in module running in parallel with the main SS-IL classification network. We design the two branches so that the feature vectors from the main and auxiliary extractors can be integrated directly, without an additional aggregation step. After obtaining the updated representations, we fine-tune the classifier on a class-balanced subset of the training data to further improve performance. We conduct extensive experiments on two small-scale CIL benchmark datasets, CIFAR-100 and ImageNet-Sub. Results show that the proposed method effectively alleviates the forgetting of old knowledge and significantly improves the performance of SS-IL on small datasets.
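The abstract gives no implementation details, but the data flow it describes (an auxiliary extractor running in parallel with the main SS-IL network, feature vectors combined without a separate aggregation module, then classifier fine-tuning on a class-balanced subset) can be sketched. The following is a minimal, hypothetical PyTorch sketch, not the authors' code: the names (ParallelExtractorCIL, balanced_subset_indices), the feature-concatenation reading of "integrated without an additional aggregation process", and the assumption that both branches emit features of the same dimension are all ours.

```python
import torch
import torch.nn as nn


class ParallelExtractorCIL(nn.Module):
    """Hypothetical sketch of the two-branch design described in the abstract:
    a main extractor (the SS-IL backbone) and an auxiliary extractor run in
    parallel, and their feature vectors feed one linear head directly, so no
    separate aggregation module is needed."""

    def __init__(self, main_extractor: nn.Module, aux_extractor: nn.Module,
                 feat_dim: int, num_classes: int):
        super().__init__()
        self.main = main_extractor  # trained incrementally, as in SS-IL
        self.aux = aux_extractor    # auxiliary branch preserving old-class semantics
        # Assumption: both branches emit feat_dim features; the head consumes
        # their concatenation, our reading of "integration without aggregation".
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_main = self.main(x)                      # (B, feat_dim)
        f_aux = self.aux(x)                        # (B, feat_dim)
        feats = torch.cat([f_main, f_aux], dim=1)  # (B, 2 * feat_dim)
        return self.classifier(feats)


def balanced_subset_indices(labels: torch.Tensor, per_class: int) -> torch.Tensor:
    """Sample `per_class` indices for every class, yielding the class-balanced
    subset on which the abstract says the classifier is fine-tuned."""
    picks = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        picks.append(idx[torch.randperm(idx.numel())[:per_class]])
    return torch.cat(picks)
```

Whether the main branch is frozen while the auxiliary one trains, and the exact weight-scaling scheme that gives aWSFE its name, are details the abstract does not specify; the sketch only fixes the overall data flow.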