Planning forward: Deep incremental hashing by gradually defrosting bits

Qinghang Su, Dayan Wu, Chenming Wu, Bo Li, Weiping Wang

Neural Networks, Volume 194, Article 108123. DOI: 10.1016/j.neunet.2025.108123. Published online 2025-09-19.
Abstract
Deep incremental hashing can generate hash codes incrementally for new classes while keeping the existing ones unchanged. Existing methods typically allocate a fixed code length to all classes, so the entire Hamming space is occupied by the existing classes and the model is left unprepared for future extensions. This significantly limits the ability to accommodate new classes effectively. Moreover, using all bits to encode only a few classes in the early sessions is inefficient in both computation and storage. This paper presents Bit Defrosting Deep Incremental Hashing (BDIH) to tackle these problems. Our key insight is to map the classes into a small subspace by freezing most hash bits during the first session, which reserves adequate space for future classes. Subsequent sessions then map new classes into progressively expanding subspaces by defrosting a portion of the frozen bits. Specifically, we propose a bit-defrosting code learning framework that consists of a bit-defrosting center generation part and a center-based bit-defrosting code learning part. The former generates hash centers as learning objectives in the expanding subspaces, while the latter learns globally discriminative hash codes under the guidance of these centers and preserves backward compatibility between the updated model and previously stored codes. As a result, our method achieves comparable performance on old classes using fewer bits while reserving more space for new ones. Extensive experiments demonstrate that BDIH outperforms existing methods in both retrieval accuracy and storage efficiency in long-sequence incremental learning scenarios.
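To make the bit-defrosting idea concrete, the following is a minimal sketch of how a per-session bit mask could keep most hash bits frozen in the first session and defrost additional blocks of bits in later sessions. This is not the authors' implementation; the mask schedule, the gradient-masking trick, and all names (`defrost_mask`, `MaskedHashHead`, `bits_per_session`) are illustrative assumptions based only on the abstract.

```python
import torch

def defrost_mask(code_length: int, session: int, bits_per_session: int) -> torch.Tensor:
    """Binary mask over hash bits: 1 = active (defrosted), 0 = frozen.

    Session 0 activates only the first `bits_per_session` bits, reserving the
    rest of the Hamming space; each later session defrosts one more block.
    """
    active = min((session + 1) * bits_per_session, code_length)
    mask = torch.zeros(code_length)
    mask[:active] = 1.0
    return mask

class MaskedHashHead(torch.nn.Module):
    """Hash layer whose frozen bits receive no gradient in the current session."""

    def __init__(self, feat_dim: int, code_length: int):
        super().__init__()
        self.fc = torch.nn.Linear(feat_dim, code_length)

    def forward(self, features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        logits = self.fc(features)
        # Detach the frozen bits so only defrosted bits are updated, which is
        # one way to keep previously stored codes backward compatible.
        return logits * mask + logits.detach() * (1.0 - mask)

# Hypothetical usage: 64-bit codes, 16 bits defrosted per session.
head = MaskedHashHead(feat_dim=512, code_length=64)
mask = defrost_mask(code_length=64, session=0, bits_per_session=16)
codes = torch.sign(head(torch.randn(8, 512), mask))
```

Under these assumptions, early sessions would only train and store the small active prefix of each code, and each new session would widen the active subspace without rewriting bits learned earlier; how BDIH actually schedules defrosting and generates its hash centers is detailed in the paper itself.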
Journal description:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.