DuPt: Rehearsal-based continual learning with dual prompts

IF 6.3 · CAS Tier 1, Computer Science · JCR Q1, Computer Science, Artificial Intelligence
Shengqin Jiang, Daolong Zhang, Fengna Cheng, Xiaobo Lu, Qingshan Liu
{"title":"DuPt: Rehearsal-based continual learning with dual prompts","authors":"Shengqin Jiang ,&nbsp;Daolong Zhang ,&nbsp;Fengna Cheng ,&nbsp;Xiaobo Lu ,&nbsp;Qingshan Liu","doi":"10.1016/j.neunet.2025.107306","DOIUrl":null,"url":null,"abstract":"<div><div>The rehearsal-based continual learning methods usually involve reviewing a small number of representative samples to enable the network to learn new contents while retaining old knowledge. However, existing works overlook two crucial factors: (1) While the network prioritizes learning new data at incremental stages, it exhibits weaker generalization capabilities when trained individually on limited samples from specific categories, in contrast to training on large-scale samples across multiple categories simultaneously. (2) Knowledge distillation of a limited set of old samples can transfer certain existing knowledge, but imposing strong constraints may hinder knowledge transfer and restrict the ability of the network from the current stage to capture fresh knowledge. To alleviate these issues, we propose a rehearsal-based continual learning method with dual prompts, termed DuPt. First, we propose an input-aware prompt, an input-level cue that utilizes an input prior to querying for valid cue information. These hints serve as an additional complement to help the input samples generate more rational and diverse distributions. Second, we introduce a proxy feature prompt, a feature-level hint that bridges the knowledge gap between the teacher and student models to maintain consistency in the feature transfer process, reinforcing feature plasticity and stability. This is because differences in network features between the new and old incremental stages could affect the generalization of their new models if strictly aligned. Our proposed prompt can act as a consistency regularization to avoid feature conflicts caused by the differences between network features. Extensive experiments validate the effectiveness of our method, which can seamlessly integrate with existing methods, leading to performance improvements.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"187 ","pages":"Article 107306"},"PeriodicalIF":6.3000,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025001856","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Rehearsal-based continual learning methods typically review a small number of representative samples so that the network can learn new content while retaining old knowledge. However, existing works overlook two crucial factors: (1) Although the network prioritizes learning new data at each incremental stage, it generalizes more weakly when trained on limited samples from a few specific categories than when trained on large-scale samples spanning many categories simultaneously. (2) Knowledge distillation over a limited set of old samples can transfer some existing knowledge, but imposing overly strong constraints may hinder that transfer and restrict the current-stage network's ability to capture fresh knowledge. To alleviate these issues, we propose a rehearsal-based continual learning method with dual prompts, termed DuPt. First, we propose an input-aware prompt: an input-level cue that uses the input as a prior to query for valid cue information. These cues serve as an additional complement that helps the input samples generate more rational and diverse distributions. Second, we introduce a proxy feature prompt: a feature-level cue that bridges the knowledge gap between the teacher and student models to maintain consistency in the feature-transfer process, reinforcing both feature plasticity and stability. The motivation is that network features differ between the new and old incremental stages, and strictly aligning them could harm the generalization of the new model. The proposed prompt therefore acts as a consistency regularizer that avoids feature conflicts caused by these differences. Extensive experiments validate the effectiveness of our method, which integrates seamlessly with existing methods, leading to performance improvements.
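The abstract describes the two prompts only at a high level. As a rough illustration, the sketch below shows one way an input-level prompt pool and a proxy feature prompt could be wired into a rehearsal-based pipeline in PyTorch. All names (`InputAwarePrompt`, `ProxyFeaturePrompt`), shapes, and design choices here are assumptions made for exposition; this is not the authors' implementation.

```python
# Hypothetical sketch of the two prompt modules described in the abstract.
# Module names, shapes, and wiring are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InputAwarePrompt(nn.Module):
    """Input-level prompt: uses an input embedding as a query into a small
    pool of learnable prompts, returning a retrieved cue that can be added
    back to the sample before the backbone."""

    def __init__(self, pool_size: int, dim: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim))     # query keys
        self.prompts = nn.Parameter(torch.randn(pool_size, dim))  # prompt values

    def forward(self, x_feat: torch.Tensor) -> torch.Tensor:
        # x_feat: (B, dim) embedding of the raw input, used as the query.
        attn = F.softmax(x_feat @ self.keys.t(), dim=-1)  # (B, pool_size)
        return attn @ self.prompts                        # (B, dim) retrieved cue


class ProxyFeaturePrompt(nn.Module):
    """Feature-level prompt: a learnable proxy placed between teacher and
    student features, so distillation becomes a soft consistency term
    rather than a strict alignment of the two feature spaces."""

    def __init__(self, dim: int):
        super().__init__()
        self.proxy = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        # Align the student with a learned proxy of the (frozen) teacher
        # features, not with the teacher features directly.
        bridged = self.proxy(teacher_feat.detach())
        return F.mse_loss(student_feat, bridged)  # consistency regularizer


if __name__ == "__main__":
    B, D = 4, 128
    ip = InputAwarePrompt(pool_size=10, dim=D)
    pp = ProxyFeaturePrompt(dim=D)
    cue = ip(torch.randn(B, D))                       # (4, 128) input-level cue
    loss = pp(torch.randn(B, D), torch.randn(B, D))   # scalar distillation term
    print(cue.shape, loss.item())
```

Under these assumptions, the retrieved cue would be fused with the sample's embedding before the backbone, and the proxy loss would be weighted into the incremental-stage objective alongside the usual classification loss on new data and rehearsal samples; the pool size and loss weight would be hyperparameters.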
Source Journal
Neural Networks
Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, it aims to encourage the development of biologically-inspired artificial intelligence.