{"title":"CapsHash: Deep Supervised Hashing with Capsule Network","authors":"Yang Li, Rui Zhang, Zhuang Miao, Jiabao Wang","doi":"10.1109/WCSP.2019.8927934","DOIUrl":null,"url":null,"abstract":"To better deal with large-scale image retrieval problem, deep hashing models based on convolutional neural network (CNN) have been widely used as effective methods, which can map similar images to compact binary hash codes with smaller hamming distance. Despite their positive results, CNN-based methods have few limitations, which are unable to understand the spatial relationship between features. To overcome this challenge, in this paper, a novel deep supervised hashing method was proposed based on capsule networks. Our method, referred to as CapsHash, can learn discriminative hash codes and capsule vectors at the same time. Moreover, we introduce a novel compound loss function that has two parts: classification hashing loss and margin loss. This compound loss function can greatly improve the discriminative ability of binary codes and further improve the image retrieval performance. Extensive experiments under different scenarios demonstrate that our CapsHash method can preserve the instance-level similarity and outperform previous state-of-the-art hashing approaches. To the best of our knowledge, CapsHash is the first method about the application of capsule networks in the deep supervised hashing domain.","PeriodicalId":108635,"journal":{"name":"2019 11th International Conference on Wireless Communications and Signal Processing (WCSP)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 11th International Conference on Wireless Communications and Signal Processing (WCSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WCSP.2019.8927934","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
To better address the large-scale image retrieval problem, deep hashing models based on convolutional neural networks (CNNs) have been widely used as effective methods that map similar images to compact binary hash codes with small Hamming distances. Despite their positive results, CNN-based methods have some limitations; in particular, they are unable to capture the spatial relationships between features. To overcome this challenge, in this paper we propose a novel deep supervised hashing method based on capsule networks. Our method, referred to as CapsHash, learns discriminative hash codes and capsule vectors at the same time. Moreover, we introduce a novel compound loss function with two parts: a classification hashing loss and a margin loss. This compound loss greatly improves the discriminative ability of the binary codes and further improves image retrieval performance. Extensive experiments under different scenarios demonstrate that our CapsHash method preserves instance-level similarity and outperforms previous state-of-the-art hashing approaches. To the best of our knowledge, CapsHash is the first application of capsule networks to deep supervised hashing.
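The abstract does not give the exact form of the compound loss. The sketch below is a minimal illustration under assumptions: it combines the standard capsule margin loss of Sabour et al. (2017), computed on per-class capsule vector lengths, with a classification-style hashing loss approximated here as cross-entropy on class logits predicted from tanh-relaxed hash codes. The function names, the tanh relaxation, and the hyper-parameters (m_pos, m_neg, lam, alpha) are illustrative choices, not the paper's definitions.

```python
# Hedged sketch of a compound loss: capsule margin loss + classification hashing loss.
# This is NOT the paper's exact formulation; the cross-entropy head on relaxed hash
# codes and all hyper-parameter values are assumptions for illustration only.
import torch
import torch.nn.functional as F


def margin_loss(caps_lengths, labels_onehot, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Standard capsule margin loss (Sabour et al., 2017) on per-class capsule lengths.

    caps_lengths: (batch, num_classes) lengths of the class capsule vectors.
    labels_onehot: (batch, num_classes) one-hot ground-truth labels.
    """
    pos = labels_onehot * F.relu(m_pos - caps_lengths).pow(2)
    neg = lam * (1.0 - labels_onehot) * F.relu(caps_lengths - m_neg).pow(2)
    return (pos + neg).sum(dim=1).mean()


def classification_hashing_loss(hash_logits, labels):
    """Assumed form: cross-entropy on class logits produced from the relaxed
    (tanh) hash codes, encouraging the binary codes to be class-discriminative."""
    return F.cross_entropy(hash_logits, labels)


def compound_loss(caps_lengths, hash_logits, labels, num_classes, alpha=1.0):
    """Weighted sum of the two terms; alpha is an assumed trade-off weight."""
    onehot = F.one_hot(labels, num_classes).float()
    return classification_hashing_loss(hash_logits, labels) + alpha * margin_loss(caps_lengths, onehot)
```

In this reading, the margin loss supervises the capsule vectors while the classification hashing loss shapes the (relaxed) binary codes, so both representations are learned jointly, consistent with the abstract's claim that CapsHash learns hash codes and capsule vectors at the same time.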