Title: Unsupervised continual learning by cross-level, instance-group and pseudo-group discrimination with hard attention
Authors: Ankit Malviya, Sayak Dhole, Chandresh Kumar Maurya
Journal: Journal of Computational Science, Volume 86, Article 102535 (Q2, Computer Science, Interdisciplinary Applications; impact factor 3.1)
DOI: 10.1016/j.jocs.2025.102535
Publication date: 2025-02-18
URL: https://www.sciencedirect.com/science/article/pii/S1877750325000122
Citations: 0
Abstract
Extensive work has been done in supervised continual learning (SCL), wherein models adapt to changing distributions with labeled data while mitigating catastrophic forgetting. However, this setting diverges from real-world scenarios, where labeled data is scarce or non-existent. Unsupervised continual learning (UCL) emerges to bridge this gap. Previous research has explored unsupervised continual feature learning methods that incorporate rehearsal to alleviate catastrophic forgetting. Although these techniques are effective, they may not be feasible when storing training data is impractical. Moreover, rehearsal techniques may face representation drift and overfitting, particularly under limited buffer sizes. To address these drawbacks, we employ parameter isolation as a strategy to mitigate forgetting. Specifically, we use task-specific hard attention to prevent updates to parameters important for previous tasks. In contrastive learning, the loss is prone to degradation when the diversity of negative samples is reduced. Therefore, we incorporate instance-to-instance similarity into contrastive learning through both direct instance grouping and cross-level discrimination with local instance groups, as well as with local pseudo-instance groups. The masked model learns features using cross-level discrimination, which naturally clusters similar data in the representation space. Extensive experimentation demonstrates that our proposed approach outperforms current state-of-the-art (SOTA) baselines by significant margins, while exhibiting minimal or nearly zero forgetting and requiring no rehearsal buffer. Additionally, the model learns distinct task boundaries.
It achieves an overall-average task and class incremental learning (TIL & CIL) accuracy of 76.79% and 62.96% respectively, with nearly zero forgetting, across standard datasets for task sequences ranging from 5 to 100. This surpasses SOTA baselines, which reach only 74.28% and 60.68% respectively in the UCL setting, where they experience substantial forgetting of over 4%. Moreover, our approach achieves performance nearly comparable to the SCL baseline, and even surpasses it on some standard datasets, with a notable reduction in forgetting from 14.51% to nearly zero.
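The parameter-isolation mechanism described in the abstract follows the hard-attention idea: each task learns a near-binary attention mask over units, and gradients flowing to units that earlier tasks marked as important are blocked. The following is a minimal sketch of that gating logic only; the function names, the scaled-sigmoid gate, and the constant `s` are illustrative assumptions, not the paper's exact implementation.

```python
import math

def hard_attention_mask(task_embedding, s=400.0):
    """Scaled sigmoid gate: a large s pushes each unit's gate toward a near-binary {0, 1} value."""
    return [1.0 / (1.0 + math.exp(-s * e)) for e in task_embedding]

def gate_gradient(grad, prev_masks):
    """Zero out gradient components for units that any previous task marked as important."""
    if not prev_masks:
        return grad
    # Union of previous tasks' masks: a unit is protected if any earlier task claimed it.
    cumulative = [max(col) for col in zip(*prev_masks)]
    return [g * (1.0 - m) for g, m in zip(grad, cumulative)]

# Toy run: task 1's mask claims units 0 and 2, so when task 2 trains,
# its gradients to those units are suppressed and forgetting is prevented.
mask_t1 = hard_attention_mask([0.05, -0.05, 0.02])
gated = gate_gradient([1.0, 1.0, 1.0], [mask_t1])
```

In a full model this gating would be applied per layer inside the backward pass (e.g. via gradient hooks), with the task embeddings themselves trained jointly with the contrastive objective.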
Journal introduction:
Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory.
The recent advances in experimental techniques such as detectors, on-line sensor networks and high-resolution imaging techniques, have opened up new windows into physical and biological processes at many levels of detail. The resulting data explosion allows for detailed data driven modeling and simulation.
This new discipline in science combines computational thinking, modern computational methods, devices and collateral technologies to address problems far beyond the scope of traditional numerical methods.
Computational science typically unifies three distinct elements:
• Modeling, Algorithms and Simulations (e.g. numerical and non-numerical, discrete and continuous);
• Software developed to solve problems in science (e.g., the biological, physical, and social sciences), engineering, medicine, and the humanities;
• Computer and information science that develops and optimizes the advanced system hardware, software, networking, and data management components (e.g. problem solving environments).