{"title":"通过对比互补增强的课堂增量学习","authors":"Xi Wang;Xu Yang;Kun Wei;Yanan Gu;Cheng Deng","doi":"10.1109/TIP.2025.3574930","DOIUrl":null,"url":null,"abstract":"Class incremental learning (CIL) endeavors to acquire new knowledge continuously from an unending data stream while retaining previously acquired knowledge. Since the amount of new data is significantly smaller than that of old data, existing methods struggle to strike a balance between acquiring new knowledge and retaining previously learned knowledge, leading to substantial performance degradation. To tackle such a dilemma, in this paper, we propose the <bold>Co</b>ntrastive <bold>Co</b>mplementary <bold>A</b>ugmentation <bold>L</b>earning (<bold>CoLA</b>) method, which mitigates the aliasing of distributions in incremental tasks. Specifically, we introduce a novel yet effective supervised contrastive learning module with instance- and class-level augmentation during base training. For the instance-level augmentation method, we spatially segment the image at different scales, creating spatial pyramid contrastive pairs to obtain more robust feature representations. Meanwhile, the class-level augmentation method randomly mixes images within the mini-batch, facilitating the learning of compact and more easily adaptable decision boundaries. In this way, we only need to train the classifier to maintain competitive performance during the incremental phases. Furthermore, we also propose CoLA+ to further enhance the proposed method with relaxed limitations on data storage. Extensive experiments demonstrate that our method achieves state-of-the-art performance on different benchmarks.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3663-3673"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Class Incremental Learning via Contrastive Complementary Augmentation\",\"authors\":\"Xi Wang;Xu Yang;Kun Wei;Yanan Gu;Cheng Deng\",\"doi\":\"10.1109/TIP.2025.3574930\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Class incremental learning (CIL) endeavors to acquire new knowledge continuously from an unending data stream while retaining previously acquired knowledge. Since the amount of new data is significantly smaller than that of old data, existing methods struggle to strike a balance between acquiring new knowledge and retaining previously learned knowledge, leading to substantial performance degradation. To tackle such a dilemma, in this paper, we propose the <bold>Co</b>ntrastive <bold>Co</b>mplementary <bold>A</b>ugmentation <bold>L</b>earning (<bold>CoLA</b>) method, which mitigates the aliasing of distributions in incremental tasks. Specifically, we introduce a novel yet effective supervised contrastive learning module with instance- and class-level augmentation during base training. For the instance-level augmentation method, we spatially segment the image at different scales, creating spatial pyramid contrastive pairs to obtain more robust feature representations. Meanwhile, the class-level augmentation method randomly mixes images within the mini-batch, facilitating the learning of compact and more easily adaptable decision boundaries. In this way, we only need to train the classifier to maintain competitive performance during the incremental phases. 
Furthermore, we also propose CoLA+ to further enhance the proposed method with relaxed limitations on data storage. Extensive experiments demonstrate that our method achieves state-of-the-art performance on different benchmarks.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"34 \",\"pages\":\"3663-3673\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11024135/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11024135/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Class incremental learning (CIL) aims to acquire new knowledge continuously from an unending data stream while retaining previously acquired knowledge. Since the amount of new data is significantly smaller than that of old data, existing methods struggle to balance acquiring new knowledge against retaining old knowledge, leading to substantial performance degradation. To tackle this dilemma, we propose the Contrastive Complementary Augmentation Learning (CoLA) method, which mitigates the aliasing of distributions across incremental tasks. Specifically, we introduce a novel yet effective supervised contrastive learning module with instance- and class-level augmentation during base training. The instance-level augmentation spatially segments each image at different scales, creating spatial-pyramid contrastive pairs that yield more robust feature representations. The class-level augmentation randomly mixes images within each mini-batch, facilitating the learning of compact and more easily adaptable decision boundaries. In this way, only the classifier needs to be trained during the incremental phases to maintain competitive performance. Furthermore, we propose CoLA+, which further enhances the method under relaxed constraints on data storage. Extensive experiments demonstrate that our method achieves state-of-the-art performance on different benchmarks.
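To make the two augmentations concrete, below is a minimal illustrative sketch in PyTorch. The function names (spatial_pyramid_views, class_level_mix), the scale choices, and the Beta-distributed mixing weight are assumptions for illustration only; the abstract does not specify CoLA's exact formulation, so this should be read as a plausible interpretation, not the authors' implementation.

```python
# Illustrative sketch (assumed details, not the paper's exact method).
import torch
import torch.nn.functional as F


def spatial_pyramid_views(images: torch.Tensor, scales=(1, 2)) -> list:
    """Instance-level augmentation (assumed form): segment each image into an
    s x s grid at several scales, resize each patch back to the input size,
    and treat the resulting patches as contrastive views of the same instance."""
    _, _, h, w = images.shape
    views = []
    for s in scales:
        ph, pw = h // s, w // s  # patch size at this pyramid level
        for i in range(s):
            for j in range(s):
                patch = images[:, :, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                views.append(F.interpolate(patch, size=(h, w),
                                           mode="bilinear", align_corners=False))
    return views  # scale 1 -> whole image; scale 2 -> four quadrants


def class_level_mix(images: torch.Tensor, labels: torch.Tensor, alpha: float = 1.0):
    """Class-level augmentation (assumed mixup-style): randomly pair images
    within the mini-batch and blend them, returning both labels and the
    mixing weight so the training loss can be interpolated accordingly."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1.0 - lam) * images[perm]
    return mixed, labels, labels[perm], lam


# Usage on a toy batch:
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
views = spatial_pyramid_views(x)            # 1 + 4 = 5 views per image
mixed_x, y_a, y_b, lam = class_level_mix(x, y)
```

Under this reading, the pyramid views feed a supervised contrastive loss so that local crops and the full image map to nearby representations, while the mixed samples smooth decision boundaries in the base session, which is consistent with the abstract's claim that only the classifier then needs retraining in the incremental phases.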