{"title":"Alleviating Over-Fitting in Hashing-Based Fine-Grained Image Retrieval: From Causal Feature Learning to Binary-Injected Hash Learning","authors":"Xinguang Xiang;Xinhao Ding;Lu Jin;Zechao Li;Jinhui Tang;Ramesh Jain","doi":"10.1109/TMM.2024.3410136","DOIUrl":null,"url":null,"abstract":"Hashing-based fine-grained image retrieval pursues learning diverse local features to generate inter-class discriminative hash codes. However, existing fine-grained hash methods with attention mechanisms usually tend to just focus on a few obvious areas, which misguides the network to over-fit some salient features. Such a problem raises two main limitations. 1) It overlooks some subtle local features, degrading the generalization capability of learned embedding. 2) It causes the over-activation of some hash bits correlated to salient features, which breaks the binary code balance and further weakens the discrimination abilities of hash codes. To address these limitations of the over-fitting problem, we propose a novel hash framework from \n<bold>C</b>\nausal \n<bold>F</b>\neature learning to \n<bold>B</b>\ninary-injected \n<bold>H</b>\nash learning (\n<bold>CFBH</b>\n), which captures various local information and suppresses over-activated hash bits simultaneously. For causal feature learning, we adopt causal inference theory to alleviate the bias towards the salient regions in fine-grained images. In detail, we obtain local features from the feature map and combine this local information with original image information followed by this theory. Theoretically, these fused embeddings help the network to re-weight the retrieval effort of each local feature and exploit more subtle variations without observational bias. For binary-injected hash learning, we propose a Binary Noise Injection (BNI) module inspired by Dropout. The BNI module not only mitigates over-activation to particular bits, but also makes hash codes uncorrelated and balanced in the Hamming space. Extensive experimental results on six popular fine-grained image datasets demonstrate the superiority of CFBH over several State-of-the-Art methods.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"26 ","pages":"10665-10677"},"PeriodicalIF":8.4000,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10566715/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
Hashing-based fine-grained image retrieval pursues learning diverse local features to generate inter-class discriminative hash codes. However, existing fine-grained hashing methods with attention mechanisms tend to focus only on a few obvious regions, which misguides the network into over-fitting some salient features. This problem raises two main limitations. 1) It overlooks some subtle local features, degrading the generalization capability of the learned embedding. 2) It causes the over-activation of some hash bits correlated with salient features, which breaks the binary code balance and further weakens the discrimination ability of the hash codes. To address these limitations of the over-fitting problem, we propose a novel hashing framework from Causal Feature learning to Binary-injected Hash learning (CFBH), which captures diverse local information and suppresses over-activated hash bits simultaneously. For causal feature learning, we adopt causal inference theory to alleviate the bias towards salient regions in fine-grained images. In detail, we obtain local features from the feature map and, following this theory, combine this local information with the original image information. Theoretically, these fused embeddings help the network re-weight the retrieval contribution of each local feature and exploit more subtle variations without observational bias. For binary-injected hash learning, we propose a Binary Noise Injection (BNI) module inspired by Dropout. The BNI module not only mitigates the over-activation of particular bits, but also makes the hash codes uncorrelated and balanced in the Hamming space. Extensive experimental results on six popular fine-grained image datasets demonstrate the superiority of CFBH over several state-of-the-art methods.
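To make the Dropout-inspired idea behind the BNI module concrete, below is a minimal PyTorch sketch of binary noise injection applied to hash logits. The class name, the flip probability, and the choice of replacing a random subset of logits with Rademacher (+1/-1) noise are illustrative assumptions; the abstract does not specify the paper's actual BNI implementation.

```python
# Sketch of a Dropout-inspired binary noise injection layer (assumed design,
# not the paper's exact BNI module).
import torch
import torch.nn as nn


class BinaryNoiseInjection(nn.Module):
    """During training, randomly replaces a fraction of hash logits with
    random +1/-1 values so that no single bit dominates; acts as the
    identity at inference time, mirroring Dropout's train/eval behavior."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p  # probability that a given bit is replaced by noise

    def forward(self, hash_logits: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return hash_logits
        # Positions to overwrite with binary noise.
        mask = torch.rand_like(hash_logits) < self.p
        # Rademacher noise: each selected position becomes +1 or -1 with equal probability.
        noise = torch.randint(0, 2, hash_logits.shape,
                              device=hash_logits.device,
                              dtype=hash_logits.dtype) * 2 - 1
        return torch.where(mask, noise, hash_logits)


# Usage sketch: inject noise into 48-bit hash logits, then binarize.
bni = BinaryNoiseInjection(p=0.1)
logits = torch.randn(4, 48)       # batch of 4 images, 48-bit codes
codes = torch.sign(bni(logits))   # binary codes in {-1, +1}
```

Under this assumed design, randomly overwriting bits during training discourages the network from relying on a few over-activated bits tied to salient features, which is consistent with the balance and decorrelation goals described in the abstract.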
Journal Description:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.