Planning forward: Deep incremental hashing by gradually defrosting bits

Impact Factor 6.3 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Qinghang Su, Dayan Wu, Chenming Wu, Bo Li, Weiping Wang
Neural Networks, Vol. 194, Article 108123, published 2025-09-19. DOI: 10.1016/j.neunet.2025.108123
Citations: 0

Abstract

Deep incremental hashing can generate hash codes incrementally for new classes while keeping the existing ones unchanged. Existing methods typically allocate a fixed code length to all classes, causing the entire Hamming space to be occupied by the existing classes and leaving the model unprepared for future extensions. This significantly limits their ability to accommodate new classes effectively. Moreover, using all bits to encode only a few classes in the early sessions is inefficient in both computation and storage. This paper presents Bit Defrosting Deep Incremental Hashing (BDIH) to tackle these problems. Our key insight is to map the classes of the first session into a small subspace by freezing most hash bits, which reserves adequate space for future classes. Subsequent sessions can then map new classes into progressively expanding subspaces by defrosting a portion of the frozen bits. Specifically, we propose a bit-defrosting code learning framework that consists of a bit-defrosting center generation part and a center-based bit-defrosting code learning part. The former generates hash centers as learning objectives in the expanding subspaces, while the latter learns globally discriminative hash codes under the guidance of these centers and preserves backward compatibility between the updated model and previously stored codes. As a result, our method achieves comparable performance on old classes using fewer bits while reserving more space for new ones. Extensive experiments demonstrate that BDIH outperforms existing methods in both retrieval accuracy and storage efficiency in long-sequence incremental learning scenarios.
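To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of the bit-defrosting idea as described in the abstract: a hash head keeps only a small prefix of the code bits learnable in the first session, clamps the remaining bits to a constant so that stored codes stay comparable across sessions, and defrosts a few more bits before each new session; hash centers are likewise restricted to the currently active subspace. Names such as BitDefrostingHead, active_bits, defrost, and make_hash_centers are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of bit-defrosting incremental hashing (illustrative, not the paper's code).
import torch
import torch.nn as nn


class BitDefrostingHead(nn.Module):
    """Hash head whose active (learnable) subspace grows session by session."""

    def __init__(self, feat_dim: int, code_len: int, initial_active: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, code_len)
        self.code_len = code_len
        self.active_bits = initial_active  # bits currently used for encoding

    def defrost(self, extra_bits: int) -> None:
        """Unfreeze `extra_bits` more positions before the next session."""
        self.active_bits = min(self.code_len, self.active_bits + extra_bits)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        codes = torch.tanh(self.fc(feats))  # relaxed binary codes in (-1, 1)
        # Frozen bits are clamped to a constant (+1 here) so that codes stored
        # in earlier sessions remain comparable with codes from later sessions.
        frozen = torch.ones_like(codes[:, self.active_bits:])
        return torch.cat([codes[:, : self.active_bits], frozen], dim=1)


def make_hash_centers(num_classes: int, active_bits: int, code_len: int) -> torch.Tensor:
    """Random +/-1 centers restricted to the active subspace (a placeholder;
    the paper's center generation is more principled)."""
    centers = torch.ones(num_classes, code_len)
    centers[:, :active_bits] = (
        torch.randint(0, 2, (num_classes, active_bits)).float() * 2 - 1
    )
    return centers


if __name__ == "__main__":
    head = BitDefrostingHead(feat_dim=512, code_len=64, initial_active=16)
    feats = torch.randn(4, 512)
    print(head(feats).shape)      # (4, 64); last 48 bits frozen to +1
    head.defrost(extra_bits=16)   # next session: 32 active bits
    centers = make_hash_centers(num_classes=10, active_bits=head.active_bits, code_len=64)
    print(centers.shape)          # (10, 64)
```

In this reading, "defrosting" simply widens the learnable prefix of the code, while the constant frozen suffix is what keeps previously stored codes backward compatible; the center-based loss that enforces global discriminability is omitted here.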
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published per year: 425
Average review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.