Unsupervised continual learning by cross-level, instance-group and pseudo-group discrimination with hard attention

Impact Factor 3.1 · CAS Tier 3 (Computer Science) · JCR Q2, Computer Science, Interdisciplinary Applications
Ankit Malviya, Sayak Dhole, Chandresh Kumar Maurya
{"title":"Unsupervised continual learning by cross-level, instance-group and pseudo-group discrimination with hard attention","authors":"Ankit Malviya,&nbsp;Sayak Dhole,&nbsp;Chandresh Kumar Maurya","doi":"10.1016/j.jocs.2025.102535","DOIUrl":null,"url":null,"abstract":"<div><div>Extensive work has been done in supervised continual learning (SCL) , wherein models adapt to changing distributions with labeled data while mitigating catastrophic forgetting. However, this approach diverges from real-world scenarios where labeled data is scarce or non-existent. Unsupervised continual learning (UCL) emerges to bridge this disparity. Previous research has explored methods for unsupervised continuous feature learning by incorporating rehearsal to alleviate the problem of catastrophic forgetting. Although these techniques are effective, they may not be feasible for scenarios where storing training data is impractical. Moreover, rehearsal techniques may confront challenges pertaining to representation drifts and overfitting, particularly under limited buffer size conditions. To address these drawbacks, we employ parameter isolation as a strategy to mitigate forgetting. Specifically, we use task-specific hard attention to prevent updates to parameters important for previous tasks. In contrastive learning, loss is prone to be negatively affected by a reduction in the diversity of negative samples. Therefore, we incorporate instance-to-instance similarity into contrastive learning through both direct instance grouping and discrimination at the cross-level with local instance groups, as well as with local pseudo-instance groups. The masked model learns the features using cross-level discrimination, which naturally clusters similar data in the representation space. Extensive experimentation demonstrates that our proposed approach outperforms current state-of-the-art (SOTA) baselines by significant margins, all while exhibiting minimal or nearly zero forgetting, and without the need for any rehearsal buffer. Additionally, the model learns distinct task boundaries. It achieves an overall-average task and class incremental learning (TIL &amp; CIL) accuracy of 76.79% and 62.96% respectively with nearly zero forgetting, across standard datasets for varying task sequences ranging from 5 to 100. This surpasses SOTA baselines, which only reach 74.28% and 60.68% respectively in the UCL setting, where they experience substantial forgetting of almost over 4%. Moreover, our approach achieves performance nearly comparable to the SCL baseline and even surpasses it on some standard datasets, with a notable reduction in forgetting from almost 14.51% to nearly zero.</div></div>","PeriodicalId":48907,"journal":{"name":"Journal of Computational Science","volume":"86 ","pages":"Article 102535"},"PeriodicalIF":3.1000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Science","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1877750325000122","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Extensive work has been done in supervised continual learning (SCL), wherein models adapt to changing distributions with labeled data while mitigating catastrophic forgetting. However, this setting diverges from real-world scenarios where labeled data is scarce or non-existent. Unsupervised continual learning (UCL) emerges to bridge this disparity. Previous research has explored methods for unsupervised continual feature learning that incorporate rehearsal to alleviate catastrophic forgetting. Although these techniques are effective, they may not be feasible when storing training data is impractical. Moreover, rehearsal techniques may confront challenges pertaining to representation drift and overfitting, particularly under limited buffer sizes. To address these drawbacks, we employ parameter isolation as a strategy to mitigate forgetting. Specifically, we use task-specific hard attention to prevent updates to parameters important for previous tasks. In contrastive learning, the loss is prone to degrade when the diversity of negative samples is reduced. Therefore, we incorporate instance-to-instance similarity into contrastive learning through direct instance grouping and through cross-level discrimination with local instance groups as well as local pseudo-instance groups. The masked model learns features via cross-level discrimination, which naturally clusters similar data in the representation space. Extensive experimentation demonstrates that our proposed approach outperforms current state-of-the-art (SOTA) baselines by significant margins while exhibiting minimal or nearly zero forgetting, and without the need for any rehearsal buffer. Additionally, the model learns distinct task boundaries. It achieves overall-average task-incremental and class-incremental learning (TIL and CIL) accuracies of 76.79% and 62.96%, respectively, with nearly zero forgetting, across standard datasets with task sequences ranging from 5 to 100. This surpasses SOTA baselines, which reach only 74.28% and 60.68%, respectively, in the UCL setting while suffering substantial forgetting of over 4%. Moreover, our approach achieves performance nearly comparable to the SCL baseline and even surpasses it on some standard datasets, reducing forgetting from 14.51% to nearly zero.
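The parameter-isolation mechanism described in the abstract relies on task-specific hard attention in the spirit of HAT (hard attention to the task): each task learns near-binary gates over layer units via a scaled sigmoid, and gradients to weights that earlier tasks marked as important are suppressed. The following is a minimal illustrative sketch only, not the authors' implementation; names such as `HardAttentionLayer`, `s_max`, and `mask_gradients` are assumptions for illustration.

```python
# Minimal sketch (assumption, not the paper's code) of HAT-style task-specific
# hard attention for parameter isolation.
import torch
import torch.nn as nn

class HardAttentionLayer(nn.Module):
    """One linear layer gated by a task-specific, near-binary attention mask."""

    def __init__(self, in_features, out_features, num_tasks, s_max=400.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        # One learnable attention embedding per task, one value per output unit.
        self.task_embed = nn.Embedding(num_tasks, out_features)
        self.s_max = s_max  # maximum slope of the gate sigmoid (annealed while training)

    def gate(self, task_id, s):
        # Scaled sigmoid: as s grows toward s_max the gate approaches a hard 0/1 mask.
        idx = torch.as_tensor(task_id, device=self.task_embed.weight.device)
        return torch.sigmoid(s * self.task_embed(idx))

    def forward(self, x, task_id, s):
        a = self.gate(task_id, s)   # (out_features,) soft-to-hard mask
        return self.fc(x) * a       # gate the layer activations for this task


def mask_gradients(layer, cumulative_mask):
    # Zero gradients of weights feeding units that earlier tasks marked as
    # important, so new-task updates cannot overwrite them (parameter isolation).
    if layer.fc.weight.grad is not None:
        layer.fc.weight.grad *= (1.0 - cumulative_mask).unsqueeze(1)
    if layer.fc.bias.grad is not None:
        layer.fc.bias.grad *= (1.0 - cumulative_mask)
```

In HAT-style training, the slope `s` is typically annealed from a small value up to `s_max` within each task so the mask becomes progressively binary, and the cumulative mask passed to `mask_gradients` is the element-wise maximum of the per-task masks learned so far.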
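The cross-level instance-group discrimination can likewise be pictured as contrasting each instance embedding against local (pseudo-)group centroids rather than only against other instances. The sketch below is a hypothetical stand-in for the paper's actual objective: `cross_level_loss`, the temperature `tau`, and the use of batch-level cluster labels as pseudo-groups are all assumptions made for illustration.

```python
# Hypothetical sketch of instance-to-group (cross-level) contrastive discrimination.
import torch
import torch.nn.functional as F

def cross_level_loss(z, group_ids, tau=0.2):
    """z: (N, D) embeddings; group_ids: (N,) long tensor of local cluster labels
    (e.g. k-means on the current batch, yielding pseudo-groups)."""
    z = F.normalize(z, dim=1)
    num_groups = int(group_ids.max().item()) + 1
    # Local group centroids (mean embedding per group), also L2-normalised.
    centroids = torch.zeros(num_groups, z.size(1), device=z.device)
    centroids.index_add_(0, group_ids, z)
    counts = torch.bincount(group_ids, minlength=num_groups).clamp(min=1)
    centroids = F.normalize(centroids / counts.unsqueeze(1), dim=1)
    # Instance-to-group logits: the positive for each instance is its own group's
    # centroid; the centroids of all other groups act as negatives.
    logits = z @ centroids.t() / tau            # (N, num_groups)
    return F.cross_entropy(logits, group_ids)
```

A full objective would typically combine such an instance-to-group term with a standard instance-to-instance contrastive loss, so that grouping compensates for reduced negative-sample diversity.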


Source Journal
Journal of Computational Science (Computer Science, Interdisciplinary Applications; Computer Science, Theory & Methods)
CiteScore: 5.50
Self-citation rate: 3.00%
Articles per year: 227
Review time: 41 days
Journal description: Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory. Recent advances in experimental techniques such as detectors, on-line sensor networks, and high-resolution imaging have opened up new windows into physical and biological processes at many levels of detail. The resulting data explosion allows for detailed data-driven modeling and simulation. This new discipline in science combines computational thinking, modern computational methods, devices, and collateral technologies to address problems far beyond the scope of traditional numerical methods. Computational science typically unifies three distinct elements:
• Modeling, algorithms, and simulations (e.g. numerical and non-numerical, discrete and continuous);
• Software developed to solve science (e.g. biological, physical, and social), engineering, medicine, and humanities problems;
• Computer and information science that develops and optimizes the advanced system hardware, software, networking, and data management components (e.g. problem-solving environments).