Deep Consistent Penalizing Hashing with noise-robust representation for large-scale image retrieval
Qibing Qin, Hong Wang, Wenfeng Zhang, Lei Huang, Jie Nie
Neurocomputing, Volume 635, Article 130014 (published 2025-03-17)
DOI: 10.1016/j.neucom.2025.130014
Citations: 0
Abstract
Benefiting from the powerful representational capacity of deep learning and the computational efficiency of binary codes, deep hashing frameworks have made significant progress in large-scale image retrieval. By exploiting prior labels, most existing deep supervised hashing methods introduce a margin-based objective loss to generate label-level penalizing boundaries for training samples during model optimization. However, the decision boundaries obtained from label-level penalizing may be inconsistent with the semantic relations hidden in the raw samples, compromising retrieval performance. Moreover, for classes with low intra-class variance or high inter-class correlation, the force field of margin-based methods may be too weak to learn a discriminative embedding space. In this paper, we address this dilemma with a novel unified deep hashing framework, termed Deep Consistent Penalizing Hashing with noise-robust representation (DCPH), which generates compact yet discriminative binary codes for efficient and accurate image retrieval. Specifically, by learning the unbalanced correlations among training samples, a semantic consistency penalizing loss, composed of pulling and pushing penalizing elements, is proposed to generate semantic decision boundaries across classes. For parameter optimization, a dice-like optimization strategy is introduced to balance the pulling and pushing fields, facilitating the generation of a highly separable Hamming space. In addition, to mitigate the negative influence of objective-unrelated information and noise, a noise-robust representation module built on a patch-wise attention strategy and a depth-wise convolution operation is developed to capture robust feature descriptors rich in fine-grained information.
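The abstract does not give the loss formula, so the following is only a rough illustration of what a pull/push penalizing scheme of this general kind can look like. It is a hypothetical numpy sketch: the function name, the Euclidean relaxation of Hamming distance, the margin value, and the pull/push weighting `alpha` are all assumptions, not the paper's actual DCPH loss.

```python
import numpy as np

def consistency_penalizing_loss(codes, labels, margin=2.0, alpha=0.5):
    """Hypothetical pull/push penalizing loss over relaxed hash codes.

    codes:  (n, k) real-valued relaxed binary codes in [-1, 1]
    labels: (n,)   integer class labels
    """
    n = codes.shape[0]
    # Pairwise squared Euclidean distances (a common continuous
    # surrogate for Hamming distance on relaxed codes).
    d2 = ((codes[:, None, :] - codes[None, :, :]) ** 2).sum(-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(n, dtype=bool)
    # Pulling term: draw same-class codes together.
    pull_pairs = same & off_diag
    pull = d2[pull_pairs].mean() if pull_pairs.any() else 0.0
    # Pushing term: penalize different-class pairs closer than the margin.
    gap = np.maximum(margin - np.sqrt(d2[~same]), 0.0)
    push = (gap ** 2).mean() if (~same).any() else 0.0
    return alpha * pull + (1 - alpha) * push
```

With well-separated clusters both terms vanish, while mixed-up codes incur a positive penalty; balancing `alpha` plays a role loosely analogous to the paper's dice-like balancing of the pulling and pushing fields.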
Comprehensive evaluations on several benchmark datasets consistently validate the effectiveness of the proposed DCPH framework, which significantly outperforms state-of-the-art deep hashing methods.
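Patch-wise attention and depth-wise convolution, the two building blocks named for the noise-robust representation module, are standard operations. A minimal numpy sketch of each follows; the tensor shapes, patch size, and mean-activation scoring are illustrative assumptions, not the paper's actual module design.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depth-wise convolution: each channel gets its own kernel.

    x:       (C, H, W) feature map
    kernels: (C, kh, kw) one kernel per channel; valid padding, stride 1.
    """
    C, H, W = x.shape
    _, kh, kw = kernels.shape
    out = np.zeros((C, H - kh + 1, W - kw + 1))
    for c in range(C):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = (x[c, i:i + kh, j:j + kw] * kernels[c]).sum()
    return out

def patch_attention(feat, patch=2):
    """Reweight non-overlapping patches by a softmax of their mean activation,
    so patches with stronger responses (presumably less noise-dominated)
    contribute more to the descriptor. Assumes H and W divide by `patch`."""
    C, H, W = feat.shape
    ph, pw = H // patch, W // patch
    # Per-patch score: mean activation over channels and patch pixels.
    scores = feat.reshape(C, ph, patch, pw, patch).mean(axis=(0, 2, 4))
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Expand patch weights back to pixel resolution and reweight.
    w_full = np.kron(w, np.ones((patch, patch)))
    return feat * w_full
```

In practice depth-wise convolution is obtained far more cheaply via a grouped convolution (e.g. `groups=C` in standard deep learning libraries); the explicit loops here only make the per-channel structure visible.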
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.