{"title":"Lightweight Class Incremental Semantic Segmentation Without Catastrophic Forgetting","authors":"Wei Cong;Yang Cong;Yu Ren","doi":"10.1109/TIP.2025.3588065","DOIUrl":null,"url":null,"abstract":"Class incremental semantic segmentation (CISS) aims to progressively segment newly introduced classes while preserving the memory of previously learned ones. Traditional CISS methods directly employ advanced semantic segmentation models (e.g., Deeplab-v3) as continual learners. However, these methods require substantial computational and memory resources, limiting their deployment on edge devices. In this paper, we propose a Lightweight Class Incremental Semantic Segmentation (LISS) model tailored for resource-constrained scenarios. Specifically, we design an automatic knowledge-preservation pruning strategy based on the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which automatically compresses the CISS model by searching for global penalty coefficients. Nonetheless, reducing model parameters exacerbates catastrophic forgetting during incremental learning. To mitigate this challenge, we develop a clustering-based pseudo labels generator to obtain high-quality pseudo labels by considering the feature space structure of old classes. It adjusts predicted probabilities from the old model according to the feature proximity to nearest sub-cluster centers for each class. Additionally, we introduce a customized soft labels module that distills the semantic relationships between classes separately. It decomposes soft labels into target probabilities, background probabilities, and other probabilities, thereby maintaining knowledge of previously learned classes in a fine-grained manner. Extensive experiments on two benchmark datasets demonstrate that our LISS model outperforms state-of-the-art approaches in both effectiveness and efficiency.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"4566-4579"},"PeriodicalIF":13.7000,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11082484/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Class incremental semantic segmentation (CISS) aims to progressively segment newly introduced classes while preserving the memory of previously learned ones. Traditional CISS methods directly employ advanced semantic segmentation models (e.g., Deeplab-v3) as continual learners. However, these methods require substantial computational and memory resources, limiting their deployment on edge devices. In this paper, we propose a Lightweight Class Incremental Semantic Segmentation (LISS) model tailored for resource-constrained scenarios. Specifically, we design an automatic knowledge-preservation pruning strategy based on the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, which compresses the CISS model by searching for global penalty coefficients. Nonetheless, reducing model parameters exacerbates catastrophic forgetting during incremental learning. To mitigate this challenge, we develop a clustering-based pseudo-label generator that obtains high-quality pseudo labels by exploiting the feature-space structure of old classes: it adjusts the old model's predicted probabilities according to each pixel feature's proximity to the nearest sub-cluster center of each class. Additionally, we introduce a customized soft-label module that separately distills the semantic relationships between classes. It decomposes soft labels into target probabilities, background probabilities, and other probabilities, thereby preserving knowledge of previously learned classes in a fine-grained manner. Extensive experiments on two benchmark datasets demonstrate that our LISS model outperforms state-of-the-art approaches in both effectiveness and efficiency.
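The clustering-based pseudo-label generation and the soft-label decomposition described in the abstract can be illustrated with a short sketch. The PyTorch code below is not the authors' released implementation; the tensor shapes, the exponential proximity weighting, the confidence threshold, the temperature, and the equal loss weighting are all illustrative assumptions.

import torch
import torch.nn.functional as F


def cluster_adjusted_pseudo_labels(old_probs, features, sub_centers,
                                   tau=2.0, conf_thresh=0.7, ignore_index=255):
    """Sketch of a clustering-based pseudo-label generator: reweight the old
    model's class probabilities by each pixel feature's proximity to the nearest
    sub-cluster center of that class, then threshold into pseudo labels.

    old_probs:   (B, C, H, W) softmax output of the frozen old model.
    features:    (B, D, H, W) pixel embeddings, assumed upsampled to (H, W).
    sub_centers: (C, K, D)    K sub-cluster centers per old class (e.g., k-means).
    """
    B, C, H, W = old_probs.shape
    D = features.shape[1]
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)            # (B*H*W, D)

    centers = sub_centers.reshape(-1, D)                           # (C*K, D)
    dists = torch.cdist(feats, centers).reshape(-1, C, sub_centers.shape[1])
    nearest = dists.min(dim=-1).values                             # (B*H*W, C)

    # Closer to a class's sub-cluster center -> weight closer to 1.
    proximity = torch.exp(-nearest / tau)
    proximity = proximity.reshape(B, H, W, C).permute(0, 3, 1, 2)  # (B, C, H, W)

    adjusted = old_probs * proximity
    adjusted = adjusted / adjusted.sum(dim=1, keepdim=True).clamp_min(1e-8)

    conf, labels = adjusted.max(dim=1)
    labels[conf < conf_thresh] = ignore_index  # drop low-confidence pixels
    return labels


def decomposed_soft_label_loss(student_logits, teacher_logits, labels,
                               bg_index=0, T=2.0, ignore_index=255):
    """Sketch of a soft-label decomposition: distill the target-class
    probability, the background probability, and the renormalized distribution
    over the remaining classes as three separate terms."""
    B, C, H, W = student_logits.shape
    s = F.softmax(student_logits / T, dim=1).permute(0, 2, 3, 1).reshape(-1, C)
    t = F.softmax(teacher_logits / T, dim=1).permute(0, 2, 3, 1).reshape(-1, C)
    y = labels.reshape(-1)

    valid = y != ignore_index
    s, t, y = s[valid], t[valid], y[valid]
    idx = torch.arange(s.shape[0], device=s.device)

    s_tgt, t_tgt = s[idx, y], t[idx, y]              # target-class probability
    s_bg, t_bg = s[:, bg_index], t[:, bg_index]      # background probability

    # Distribution over the remaining ("other") classes, renormalized.
    mask = torch.ones_like(s, dtype=torch.bool)
    mask[idx, y] = False
    mask[:, bg_index] = False
    s_other = s.masked_fill(~mask, 0.0)
    t_other = t.masked_fill(~mask, 0.0)
    s_other = s_other / s_other.sum(dim=1, keepdim=True).clamp_min(1e-8)
    t_other = t_other / t_other.sum(dim=1, keepdim=True).clamp_min(1e-8)

    loss_tgt = F.binary_cross_entropy(s_tgt.clamp(1e-6, 1 - 1e-6), t_tgt)
    loss_bg = F.binary_cross_entropy(s_bg.clamp(1e-6, 1 - 1e-6), t_bg)
    loss_other = F.kl_div((s_other + 1e-8).log(), t_other, reduction="batchmean")
    return loss_tgt + loss_bg + loss_other

In this sketch the teacher is the frozen old model, and the pseudo labels from the first function could serve as the per-pixel targets of the second; how the three distillation terms are actually weighted in the LISS objective is not specified here.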