Semantic-Consistent Deep Quantization for Cross-modal Retrieval
Liya Ma, N. Zhang, Kuang-I Shu, Xitao Zou
DOI: 10.1109/ICICIP53388.2021.9642180
2021 11th International Conference on Intelligent Control and Information Processing (ICICIP), published 2021-12-03
Abstract
By compensating for the limited representation capability of hashing codes for high-dimensional data, quantization methods have generally been found to perform better in cross-modal similarity retrieval. However, in current quantization approaches the codebook, the most critical basis for quantization, remains passive and detached from the learning framework. To make the codebook an active component, we propose semantic-consistent deep quantization (SCDQ), the first scheme to integrate quantization into deep network learning in an end-to-end fashion. Specifically, two classifiers following the deep representation learning networks are formulated to produce class-wise abstract patterns with the help of label alignment. Meanwhile, our approach learns a collaborative codebook for both modalities, which embeds semantically consistent bimodal information in the codewords and bridges the patterns in the classifiers with the codewords in the codebook. By designing a novel algorithm architecture and codebook update strategy, SCDQ enables effective and efficient cross-modal retrieval in an asymmetric way. Extensive experiments on two benchmark datasets demonstrate that SCDQ yields strong cross-modal retrieval performance and outperforms several state-of-the-art cross-modal retrieval methods.
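To make the abstract's core ideas concrete, the following is a minimal, hypothetical sketch of codebook-based quantization with asymmetric retrieval, the general mechanism SCDQ builds on. It is not the paper's method: the codebook here is random rather than learned end-to-end, the dimensions and sizes are illustrative, and no deep networks or label alignment are involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: d-dim features and a shared codebook of K codewords.
# In SCDQ the codebook is learned jointly with the networks; here it is random.
K, d = 16, 8
codebook = rng.normal(size=(K, d))
database = rng.normal(size=(100, d))   # database items, e.g. image features

# Quantize: each database item is replaced by the index of its nearest codeword,
# giving a compact integer code per item instead of a full float vector.
dists = np.linalg.norm(database[:, None, :] - codebook[None, :, :], axis=-1)
codes = dists.argmin(axis=1)

# Asymmetric retrieval: the query stays real-valued (no quantization error on
# the query side); only database items are represented by their codewords.
query = rng.normal(size=d)             # e.g. a text feature from the other modality
query_to_codewords = np.linalg.norm(codebook - query, axis=1)  # K distances, once
scores = query_to_codewords[codes]     # cheap table lookup per database item
ranking = scores.argsort()             # ascending distance = best matches first
```

The asymmetry is what makes this efficient: distances from the query to all K codewords are computed once, after which scoring each database item is a single lookup.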