{"title":"High-Quality-High-Quantity Semantic Distillation for Incremental Object Detection","authors":"Mengxue Kang, Jinpeng Zhang, Xiashuang Wang, Xuhui Huang","doi":"10.1145/3579654.3579682","DOIUrl":null,"url":null,"abstract":"Model is required to learn from dynamic data stream under incremental object detection task. However, traditional object detection model fails to deal with this scenario. Fine-tuning on new task suffers from a fast performance decay of early learned tasks, which is known as catastrophic forgetting. A promising way to alleviate catastrophic forgetting is knowledge distillation, which includes feature distillation and response distillation. Previous feature distillation methods have not discuss knowledge selection and knowledge transfer at the same time. In this paper, we propose high-level semantic feature distillation and task re-balancing strategy that consider both high-quality knowledge selection and high-quantity knowledge transfer simultaneously. Extensive experiments are conducted on MS COCO benchmarks. The performance of our method exceeds previous SOTA methods under all experimental scenarios. Remarkably, our method reduces the mAP gap toward full-training to 2.58, which is much better than that of the previous SOTA method with a gap of 3.30.","PeriodicalId":146783,"journal":{"name":"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence","volume":"267 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579654.3579682","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In incremental object detection, a model is required to learn from a dynamic data stream, a scenario that traditional object detection models fail to handle. Fine-tuning on a new task causes rapid performance decay on earlier tasks, a phenomenon known as catastrophic forgetting. A promising way to alleviate catastrophic forgetting is knowledge distillation, which includes feature distillation and response distillation. Previous feature distillation methods do not address knowledge selection and knowledge transfer at the same time. In this paper, we propose a high-level semantic feature distillation and task re-balancing strategy that considers both high-quality knowledge selection and high-quantity knowledge transfer simultaneously. Extensive experiments are conducted on MS COCO benchmarks. Our method outperforms previous SOTA methods under all experimental scenarios. Remarkably, it reduces the mAP gap toward full training to 2.58, substantially better than the previous SOTA method's gap of 3.30.
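To make the feature-distillation idea concrete, the sketch below shows one plausible way to distill only the teacher's most informative feature locations when training a student detector on new classes. The masking rule, the `keep_ratio` parameter, and the loss normalization are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a masked feature-distillation loss for incremental
# detection. The "high-quality knowledge selection" here is approximated by
# keeping only the teacher's most active spatial positions; this is an
# assumption for illustration, not the method proposed in the paper.
import torch
import torch.nn.functional as F


def semantic_feature_distillation(student_feat: torch.Tensor,
                                  teacher_feat: torch.Tensor,
                                  keep_ratio: float = 0.5) -> torch.Tensor:
    """L2 distillation restricted to the teacher's most active locations.

    student_feat, teacher_feat: (B, C, H, W) feature maps from the same level.
    keep_ratio: fraction of spatial positions (ranked by teacher activation
                magnitude) that contribute to the loss.
    """
    b, c, h, w = teacher_feat.shape

    # Per-location importance: mean absolute activation over channels.
    importance = teacher_feat.abs().mean(dim=1).reshape(b, -1)       # (B, H*W)
    k = max(1, int(keep_ratio * h * w))
    threshold = importance.topk(k, dim=1).values[:, -1:]             # (B, 1)
    mask = (importance >= threshold).float().reshape(b, 1, h, w)     # (B,1,H,W)

    # Masked mean-squared error against the frozen teacher features.
    diff = (student_feat - teacher_feat.detach()) ** 2
    return (diff * mask).sum() / (mask.sum() * c + 1e-6)
```

In practice such a term would be added, with some weight, to the standard detection loss on the new-task data, so that the student both fits the new classes and stays close to the teacher on the selected feature regions.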