Fast Partial-Modal Online Cross-Modal Hashing

Impact Factor: 13.7
Fengling Li;Yang Sun;Tianshi Wang;Lei Zhu;Xiaojun Chang
DOI: 10.1109/TIP.2025.3586504
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 4440-4455
Published: 2025-07-14
Citations: 0

Abstract

Cross-Modal Hashing (CMH) has become a powerful technique for large-scale cross-modal retrieval, offering benefits like fast computation and efficient storage. However, most CMH models struggle to adapt to streaming multimodal data in real-time once deployed. Although recent online CMH studies have made progress in this area, they often overlook two key challenges: 1) learning effectively from streaming partial-modal multimodal data, and 2) avoiding the high costs associated with frequent hash function re-training and large-scale updates to database hash codes. To address these issues, we propose Fast Partial-modal Online Cross-Modal Hashing (FPO-CMH), the first approach to tackle online cross-modal hash learning with partial-modal data. This marks a significant shift from previous methods that rely on fully-available multimodal data. Specifically, our approach introduces a multimodal dual-tier anchor bank, initialized using offline training data, which allows offline-trained CMH models to adapt seamlessly to partial-modal data while progressively updating the anchor bank. By leveraging gradient accumulation and asynchronous optimization, FPO-CMH facilitates efficient online cross-modal hash learning. Additionally, an initial-anchor rehearsal strategy is employed to prevent model catastrophic forgetting during online optimization, ensuring the code invariance of database hash codes and eliminating the need for frequent hash function re-training. Extensive experiments validate the superiority of FPO-CMH, especially in handling streaming partial-modal multimodal data, a more realistic scenario. The source codes and datasets are available at https://github.com/DandelionWow/FPO-CMH
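The abstract's claim of fast computation rests on Hamming-space retrieval. As a hedged illustration (toy codes and function names are ours, not from the paper), ranking database items against a query reduces to an XOR plus a popcount over binary hash codes:

```python
# Toy illustration of Hamming-space retrieval over binary hash codes.
# All codes and names here are invented for illustration; they are not
# the paper's data or API.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two hash codes stored as ints."""
    return bin(a ^ b).count("1")

def retrieve(query_code: int, database: dict, k: int = 3):
    """Return the k database items closest to the query in Hamming space."""
    ranked = sorted(database, key=lambda item: hamming(query_code, database[item]))
    return ranked[:k]

# Toy 8-bit codes for database images; a text query maps to its own code.
db = {"img_a": 0b10110100, "img_b": 0b10110101, "img_c": 0b01001011}
print(retrieve(0b10110100, db, k=2))  # → ['img_a', 'img_b']
```

Because each comparison is a couple of machine-word operations, scanning millions of codes is far cheaper than comparing dense real-valued embeddings, which is what makes hashing attractive for large-scale cross-modal retrieval.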
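Two of the mechanisms the abstract names, gradient accumulation and initial-anchor rehearsal, can be illustrated on a deliberately tiny 1-D model. This is our own sketch of the general ideas under simplifying assumptions, not the paper's optimizer:

```python
# Toy sketch (our own 1-D example, not the paper's actual algorithm) of two
# ideas from the abstract: accumulating gradients over several streaming
# samples before each parameter update, and rehearsing a fixed set of
# initial anchor pairs so online learning does not overwrite what the model
# already encodes for the existing database.

def grad(w, x, y):
    # Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w.
    return (w * x - y) * x

def online_update(w, stream, anchors, lr=0.05, accum_steps=4):
    """One delayed SGD step per `accum_steps` streaming samples."""
    acc, count = 0.0, 0
    for x, y in stream:
        acc += grad(w, x, y)
        for ax, ay in anchors:        # initial-anchor rehearsal
            acc += grad(w, ax, ay)
        count += 1
        if count % accum_steps == 0:  # apply the accumulated update
            w -= lr * acc / accum_steps
            acc = 0.0
    return w

# Streaming pairs drawn from y = 2x; one rehearsed anchor pair.
stream = [(1.0, 2.0), (2.0, 4.0), (1.0, 2.0), (2.0, 4.0)]
w = online_update(1.0, stream, anchors=[(1.0, 2.0)])
# w has moved from its initial value 1.0 toward the true slope 2.0
```

Replaying the initial anchors in every accumulated step is what keeps old inputs mapping to (nearly) the same outputs, the property the abstract calls code invariance of the database hash codes; the accumulation keeps update frequency, and hence online cost, low.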