{"title":"Class Incremental Learning for Visual Task using Knowledge Distillation","authors":"Usman Tahir, Amanullah Yasin, Ahmad Jalal","doi":"10.1109/INMIC56986.2022.9972924","DOIUrl":null,"url":null,"abstract":"The Artificial Agent's ability to enhance knowledge incrementally for new data is challenging in class incremental learning because of catastrophic forgetting in which new classes make the trained model quickly forget old classes knowledge. Knowledge distilling techniques and keeping subset of data from the old classes have been proposed to revamp models to accommodate new classes. These techniques allow models to sustain their knowledge without forgetting everything they already know but somewhat alleviate the catastrophic forgetting problem. In this study we propose class incremental learning using bi-distillation (CILBD) method that effectively learn not only the classes of the new data but also previously learned classes. The proposed architecture uses knowledge distillation in such a way that the student model directly learns knowledge from two teacher model and thus alleviate the forgetting of the old class. Our experiments on the iCIFAR-100 dataset showed that the proposed method is more accurate at classifying, forgets less, and works better than state-of-the-art methods.","PeriodicalId":404424,"journal":{"name":"2022 24th International Multitopic Conference (INMIC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 24th International Multitopic Conference (INMIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INMIC56986.2022.9972924","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
An artificial agent's ability to extend its knowledge incrementally as new data arrives is challenging in class incremental learning because of catastrophic forgetting, in which training on new classes makes the model quickly forget what it has learned about old classes. Knowledge distillation techniques and retaining a subset of data from the old classes have been proposed to adapt models so that they can accommodate new classes. These techniques help models retain what they already know, but they only partially alleviate the catastrophic forgetting problem. In this study we propose a class incremental learning using bi-distillation (CILBD) method that effectively learns not only the classes of the new data but also the previously learned classes. The proposed architecture uses knowledge distillation in such a way that the student model directly learns knowledge from two teacher models, thereby alleviating the forgetting of the old classes. Our experiments on the iCIFAR-100 dataset show that the proposed method classifies more accurately, forgets less, and outperforms state-of-the-art methods.
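To make the bi-distillation idea concrete, the sketch below shows one plausible way a student could be trained against two teachers: a cross-entropy term on the new-task labels plus two temperature-scaled distillation terms, one matching the old-task teacher on the old-class outputs and one matching the new-task teacher on the new-class outputs. This is a minimal PyTorch sketch based only on the abstract; the loss weighting (`alpha`, `beta`), the temperature, and the split of the output head by `old_class_count` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL loss between student and one teacher (temperature-scaled)."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (temperature ** 2)


def bi_distillation_loss(student_logits, old_teacher_logits, new_teacher_logits,
                         labels, old_class_count, alpha=0.5, beta=0.5, temperature=2.0):
    """
    Hypothetical combined objective for a student learning from two teachers:
      (1) cross-entropy on the current-task labels,
      (2) distillation from the old-task teacher over the old-class outputs,
      (3) distillation from the new-task teacher over the new-class outputs.
    The weighting scheme is an assumption for illustration only.
    """
    ce = F.cross_entropy(student_logits, labels)
    old_kd = distillation_loss(student_logits[:, :old_class_count],
                               old_teacher_logits[:, :old_class_count], temperature)
    new_kd = distillation_loss(student_logits[:, old_class_count:],
                               new_teacher_logits, temperature)
    return ce + alpha * old_kd + beta * new_kd
```

In practice, the old-task teacher would be a frozen copy of the model before the new classes were added, and the new-task teacher a model trained only on the new classes; the student's logits over the old classes are pulled toward the former and its logits over the new classes toward the latter, which is the intuition behind learning "directly from two teacher models" described above.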