A Compression Hashing Scheme for Large-Scale Face Retrieval

Jiayong Li, Wing W. Y. Ng, Xing Tian
DOI: 10.1109/ICIST.2018.8426095
Published in: 2018 Eighth International Conference on Information Science and Technology (ICIST), June 2018
Citation count: 0

Abstract

Hashing methods have an intrinsic trade-off: a longer binary code yields better precision but incurs a larger storage cost. Most existing hashing methods aim to find an optimal code length that balances precision against storage. In practice, however, face image collections are enormous, and the resulting storage burden is prohibitively heavy. We propose applying a similarity-preserving compression scheme to existing unsupervised hashing methods, so as to reduce the storage burden while maintaining high precision. We employ two code lengths: a long code of the original length, and a short code whose length results from m-fold compression. The hash code for the query face keeps the original length, while the hash code for each stored image is compressed by a ratio of m to reduce storage cost. During face retrieval, the compressed hash code of a stored face is concatenated with itself m times so that it can be compared with the long query hash code via Hamming distance. Experimental results on large-scale retrieval demonstrate that the proposed compression scheme can be applied efficiently to existing methods, achieving both high precision and a small storage footprint.
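The retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the segment-wise majority-vote compression used here is an assumption (the paper's similarity-preserving compression may differ), but the tile-and-compare step mirrors the described m-fold concatenation against the full-length query code.

```python
import numpy as np

def compress_code(long_code: np.ndarray, m: int) -> np.ndarray:
    """Hypothetical m-fold compression: split the long code into m equal
    segments and take a bitwise majority vote. Stand-in for the paper's
    actual similarity-preserving compression scheme."""
    segments = long_code.reshape(m, -1)            # m segments, each of length L/m
    return (segments.sum(axis=0) * 2 >= m).astype(np.uint8)

def retrieval_distance(query_code: np.ndarray, stored_short: np.ndarray, m: int) -> int:
    """Tile the stored short code m times so it matches the query length,
    then compute the Hamming distance bit by bit."""
    expanded = np.tile(stored_short, m)            # concatenate with itself m times
    return int(np.count_nonzero(query_code != expanded))

# Toy example: L = 8-bit codes, m = 2, so stored codes occupy only 4 bits.
rng = np.random.default_rng(0)
query = rng.integers(0, 2, 8).astype(np.uint8)
stored = compress_code(rng.integers(0, 2, 8).astype(np.uint8), m=2)
print(retrieval_distance(query, stored, m=2))
```

Only the short codes are kept in the database, so storage shrinks by a factor of m, while the query-side code retains its full length and discriminative power.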