{"title":"非样例类增量哈希的边界感知原型增强和双层知识蒸馏","authors":"Qinghang Su , Dayan Wu , Bo Li","doi":"10.1016/j.knosys.2025.113520","DOIUrl":null,"url":null,"abstract":"<div><div>Deep hashing methods are extensively applied in image retrieval for their efficiency and low storage demands. Recently, deep incremental hashing methods have addressed the challenge of adapting to new classes in non-stationary environments, while ensuring compatibility with existing classes. However, most of these methods require old-class samples for joint training to resist catastrophic forgetting, which is not always feasible due to privacy concerns. This constraint underscores the need for Non-Exemplar Class-Incremental Hashing (NECIH) approaches, designed to retain knowledge without storing old-class samples. In NECIH methodologies, hash prototypes are commonly employed to maintain the discriminability of hash codes. However, these prototypes often fail to represent old-class distribution accurately, causing confusion between old and new classes. Furthermore, traditional instance-level knowledge distillation techniques are insufficient for efficiently transferring the structural information inherent in the feature space. To tackle these challenges, we introduce a novel deep incremental hashing approach called Boundary-aware <strong>P</strong>rototype <strong>A</strong>ugmentation and Dual-level Knowledge <strong>D</strong>istillation for NEC<strong>IH</strong> (PADIH). PADIH comprises three key components: the Prototype-based Code Learning (PCL) module, the Boundary-aware Prototype Augmentation (BPA) module, and the Dual-level Knowledge Distillation (DKD) module. Specifically, the PCL module learns discriminative hash codes for new classes, while the BPA module augments the old-class prototypes into pseudo codes, with an emphasis on the distribution boundaries. Moreover, the DKD module integrates both instance-level and relation-level knowledge distillation to facilitate the transfer of comprehensive information between models. Extensive experiments conducted on four benchmarks across twelve incremental learning situations demonstrate the superior performance of PADIH.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"318 ","pages":"Article 113520"},"PeriodicalIF":7.2000,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Boundary-aware Prototype Augmentation and Dual-level Knowledge Distillation for Non-Exemplar Class-Incremental Hashing\",\"authors\":\"Qinghang Su , Dayan Wu , Bo Li\",\"doi\":\"10.1016/j.knosys.2025.113520\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Deep hashing methods are extensively applied in image retrieval for their efficiency and low storage demands. Recently, deep incremental hashing methods have addressed the challenge of adapting to new classes in non-stationary environments, while ensuring compatibility with existing classes. However, most of these methods require old-class samples for joint training to resist catastrophic forgetting, which is not always feasible due to privacy concerns. This constraint underscores the need for Non-Exemplar Class-Incremental Hashing (NECIH) approaches, designed to retain knowledge without storing old-class samples. In NECIH methodologies, hash prototypes are commonly employed to maintain the discriminability of hash codes. 
However, these prototypes often fail to represent old-class distribution accurately, causing confusion between old and new classes. Furthermore, traditional instance-level knowledge distillation techniques are insufficient for efficiently transferring the structural information inherent in the feature space. To tackle these challenges, we introduce a novel deep incremental hashing approach called Boundary-aware <strong>P</strong>rototype <strong>A</strong>ugmentation and Dual-level Knowledge <strong>D</strong>istillation for NEC<strong>IH</strong> (PADIH). PADIH comprises three key components: the Prototype-based Code Learning (PCL) module, the Boundary-aware Prototype Augmentation (BPA) module, and the Dual-level Knowledge Distillation (DKD) module. Specifically, the PCL module learns discriminative hash codes for new classes, while the BPA module augments the old-class prototypes into pseudo codes, with an emphasis on the distribution boundaries. Moreover, the DKD module integrates both instance-level and relation-level knowledge distillation to facilitate the transfer of comprehensive information between models. Extensive experiments conducted on four benchmarks across twelve incremental learning situations demonstrate the superior performance of PADIH.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"318 \",\"pages\":\"Article 113520\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-04-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125005660\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125005660","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Boundary-aware Prototype Augmentation and Dual-level Knowledge Distillation for Non-Exemplar Class-Incremental Hashing
Deep hashing methods are extensively applied in image retrieval for their efficiency and low storage demands. Recently, deep incremental hashing methods have addressed the challenge of adapting to new classes in non-stationary environments while remaining compatible with existing classes. However, most of these methods require old-class samples for joint training to resist catastrophic forgetting, which is not always feasible due to privacy concerns. This constraint underscores the need for Non-Exemplar Class-Incremental Hashing (NECIH) approaches, which are designed to retain knowledge without storing old-class samples. In NECIH methods, hash prototypes are commonly employed to maintain the discriminability of hash codes. However, these prototypes often fail to represent the old-class distributions accurately, causing confusion between old and new classes. Furthermore, traditional instance-level knowledge distillation techniques are insufficient for efficiently transferring the structural information inherent in the feature space. To tackle these challenges, we introduce a novel deep incremental hashing approach called Boundary-aware Prototype Augmentation and Dual-level Knowledge Distillation for NECIH (PADIH). PADIH comprises three key components: the Prototype-based Code Learning (PCL) module, the Boundary-aware Prototype Augmentation (BPA) module, and the Dual-level Knowledge Distillation (DKD) module. Specifically, the PCL module learns discriminative hash codes for new classes, while the BPA module augments the old-class prototypes into pseudo codes, with an emphasis on the distribution boundaries. Moreover, the DKD module integrates instance-level and relation-level knowledge distillation to transfer comprehensive information between models. Extensive experiments on four benchmarks across twelve incremental learning settings demonstrate the superior performance of PADIH.
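The abstract describes the BPA and DKD modules only at a high level. As a rough, non-authoritative illustration of the two ideas (not the paper's actual formulation), the PyTorch-style sketch below samples pseudo codes around stored old-class prototypes with offsets scaled by an assumed per-class boundary radius, and combines an instance-level matching term with a relation-level term over pairwise similarities. All names and parameters here (augment_prototypes, class_radii, boundary_scale, and the particular loss functions) are assumptions made for illustration.

import torch
import torch.nn.functional as F

def augment_prototypes(prototypes, class_radii, num_pseudo=8, boundary_scale=1.0):
    # prototypes:  (C, d) stored old-class prototype codes
    # class_radii: (C,)   assumed per-class spread statistics saved from the old task
    C, d = prototypes.shape
    directions = F.normalize(torch.randn(C, num_pseudo, d), dim=-1)    # random unit directions
    offsets = boundary_scale * class_radii.view(C, 1, 1) * directions  # push pseudo codes toward the class boundary
    pseudo = prototypes.unsqueeze(1) + offsets                         # (C, num_pseudo, d)
    return pseudo.reshape(C * num_pseudo, d)

def dual_level_kd(old_feats, new_feats, temperature=2.0):
    # Instance-level term: keep each new-model embedding close to its old-model counterpart.
    inst = F.mse_loss(new_feats, old_feats.detach())
    # Relation-level term: preserve the pairwise similarity structure of the batch.
    sim_old = F.softmax(old_feats @ old_feats.t() / temperature, dim=-1)
    sim_new = F.log_softmax(new_feats @ new_feats.t() / temperature, dim=-1)
    rel = F.kl_div(sim_new, sim_old.detach(), reduction="batchmean")
    return inst + rel

# Toy usage with made-up shapes: 10 old classes, 64-dimensional codes.
protos = torch.randn(10, 64)
radii = torch.rand(10)
pseudo_codes = augment_prototypes(protos, radii)   # (80, 64) boundary-emphasized pseudo codes

In a setup like this, the pseudo codes would stand in for old-class samples during training of a new session, which is the role the abstract assigns to BPA, while the combined distillation loss plays the role the abstract assigns to DKD.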
About the journal:
Knowledge-Based Systems is an international and interdisciplinary journal in artificial intelligence that publishes original, innovative, and creative research results, with a focus on systems built on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.