{"title":"Continual Learning by Contrastive Learning of Regularized Classes in Multivariate Gaussian Distributions.","authors":"Hyung-Jun Moon, Sung-Bae Cho","doi":"10.1142/S012906572550025X","DOIUrl":null,"url":null,"abstract":"<p><p>Deep neural networks struggle with incremental updates due to catastrophic forgetting, where newly acquired knowledge interferes with the learned previously. Continual learning (CL) methods aim to overcome this limitation by effectively updating the model without losing previous knowledge, but they find it difficult to continuously maintain knowledge about previous tasks, resulting from overlapping stored information. In this paper, we propose a CL method that preserves previous knowledge as multivariate Gaussian distributions by independently storing the model's outputs per class and continually reproducing them for future tasks. We enhance the discriminability between classes and ensure the plasticity for future tasks by exploiting contrastive learning and representation regularization. The class-wise spatial means and covariances, distinguished in the latent space, are stored in memory, where the previous knowledge is effectively preserved and reproduced for incremental tasks. Extensive experiments on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet-100 demonstrate that the proposed method achieves accuracies of 93.21%, 77.57%, and 78.15%, respectively, outperforming state-of-the-art CL methods by 2.34 %p, 2.1 %p, and 1.91 %p. Additionally, it achieves the lowest mean forgetting rates across all datasets.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550025"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of neural systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/S012906572550025X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep neural networks struggle with incremental updates due to catastrophic forgetting, where newly acquired knowledge interferes with what was learned previously. Continual learning (CL) methods aim to overcome this limitation by updating the model without losing previous knowledge, but they find it difficult to continuously maintain knowledge of previous tasks because the stored information overlaps. In this paper, we propose a CL method that preserves previous knowledge as multivariate Gaussian distributions by independently storing the model's outputs per class and continually reproducing them for future tasks. We enhance the discriminability between classes and ensure plasticity for future tasks by exploiting contrastive learning and representation regularization. The class-wise spatial means and covariances, distinguished in the latent space, are stored in memory, where the previous knowledge is effectively preserved and reproduced for incremental tasks. Extensive experiments on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet-100 demonstrate that the proposed method achieves accuracies of 93.21%, 77.57%, and 78.15%, respectively, outperforming state-of-the-art CL methods by 2.34%p, 2.1%p, and 1.91%p. Additionally, it achieves the lowest mean forgetting rate across all datasets.
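The core mechanism described in the abstract, storing a per-class mean and covariance in the latent space and sampling from the resulting multivariate Gaussians to replay previous classes, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the diagonal jitter term, and the replay batch size are assumptions made for the example.

```python
# Sketch: store class-wise Gaussian statistics of latent features and sample
# pseudo-features from them when training on later tasks.
import torch
from torch.distributions import MultivariateNormal


def summarize_class_features(features: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Compute the mean and covariance of one class's latent features (shape N x D)."""
    mean = features.mean(dim=0)
    centered = features - mean
    cov = centered.T @ centered / (features.shape[0] - 1)
    # Small diagonal jitter keeps the covariance positive definite for sampling
    # (an assumption of this sketch, not a detail taken from the paper).
    cov = cov + 1e-4 * torch.eye(features.shape[1])
    return mean, cov


def sample_replay_features(memory: dict, n_per_class: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Draw pseudo-features and labels from the stored class-wise Gaussians."""
    feats, labels = [], []
    for cls, (mean, cov) in memory.items():
        dist = MultivariateNormal(mean, covariance_matrix=cov)
        feats.append(dist.sample((n_per_class,)))
        labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)


# Usage (illustrative): after finishing task t, store statistics per class,
#   memory[c] = summarize_class_features(latent_features_of_class_c)
# and while learning task t+1, mix sampled pseudo-features into the updates,
#   replay_feats, replay_labels = sample_replay_features(memory, n_per_class=64)
```

Replaying samples drawn from stored Gaussian statistics, rather than raw exemplars, is what lets the method keep per-class knowledge compact and separable; the contrastive learning and representation regularization described above are intended to make the latent space well suited to this Gaussian summary.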