{"title":"All Points Guided Adversarial Generator for Targeted Attack Against Deep Hashing Retrieval","authors":"Rongxin Tu;Xiangui Kang;Chee Wei Tan;Chi-Hung Chi;Kwok-Yan Lam","doi":"10.1109/TIFS.2025.3534585","DOIUrl":null,"url":null,"abstract":"Deep hashing has been widely used in image retrieval tasks, while deep hashing networks are vulnerable to adversarial example attacks. To improve the deep hashing networks’ robustness, it is essential to investigate adversarial attacks on the networks, especially targeted attacks. Among the existing targeted attacks for hashing, the generation-based targeted attack methods have attracted increasing attention due to their efficiency in generating adversarial examples. However, these methods supervise the generation of adversarial examples solely with the hash codes of positive samples, without employing the hash codes of all points in the training set to directly participate in supervisory training, thereby making the attack less effective. Since the hash codes of the training set samples are generated by a well-trained hashing model, these hash codes retain rich semantic information of their corresponding samples, highlighting the necessity of sufficiently utilizing them. Therefore, in this paper, we propose a targeted attack method that utilizes all points’ hash codes in the training set to guide the generation of adversarial attack examples directly. Specifically, we first decode the target label to obtain the corresponding feature map. Then, we concatenate the feature map with the query image and feed them into an encoder-decoder network that employs a skip-connection strategy to obtain a perturbed example. Furthermore, to guide adversarial example generation, we introduce a loss function that exploits the similarities between the perturbed example’s hash code and all points’ hash codes in the training set, thereby making sufficient utilization of the rich semantic information in these hash codes. Experimental results illustrate that our method outperforms the state-of-the-art targeted attack methods in targeted attack effectiveness and transferability. The code is available at <uri>https://github.com/rongxintu3/APGA</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"1695-1709"},"PeriodicalIF":8.0000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10854600/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
引用次数: 0
Abstract
Deep hashing is widely used in image retrieval, but deep hashing networks are vulnerable to adversarial example attacks. To improve the robustness of these networks, it is essential to investigate adversarial attacks against them, especially targeted attacks. Among existing targeted attacks on hashing, generation-based methods have attracted increasing attention because of their efficiency in producing adversarial examples. However, these methods supervise the generation of adversarial examples only with the hash codes of positive samples and do not let the hash codes of all points in the training set participate directly in supervision, which limits attack effectiveness. Since the hash codes of the training-set samples are produced by a well-trained hashing model, they retain rich semantic information about their corresponding samples, which makes it important to exploit them fully. In this paper, we therefore propose a targeted attack method that directly uses the hash codes of all training-set points to guide the generation of adversarial examples. Specifically, we first decode the target label to obtain a corresponding feature map. We then concatenate this feature map with the query image and feed the result into an encoder-decoder network with skip connections to obtain a perturbed example. Furthermore, to guide adversarial example generation, we introduce a loss function that exploits the similarities between the perturbed example's hash code and the hash codes of all training-set points, thereby fully exploiting the rich semantic information in these codes. Experimental results show that our method outperforms state-of-the-art targeted attack methods in both attack effectiveness and transferability. The code is available at https://github.com/rongxintu3/APGA.
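To illustrate the "all points guided" idea described above, the sketch below shows one plausible form of a loss that compares a perturbed example's relaxed hash code against the hash codes of every training point, pulling it toward points whose labels overlap the target label and pushing it away from the rest. The function name, tensor shapes, and weighting scheme are assumptions for illustration only, not the authors' implementation; see the linked repository for the actual code.

```python
import torch


def all_points_guided_loss(perturbed_codes, train_codes, train_labels, target_labels):
    """Hypothetical sketch of an all-points-guided targeted loss.

    perturbed_codes: (B, K) relaxed hash codes (e.g., tanh outputs) of adversarial examples
    train_codes:     (N, K) binary hash codes (+1/-1) of all training-set points
    train_labels:    (N, C) multi-hot labels of the training-set points
    target_labels:   (B, C) multi-hot target labels of the attack
    """
    # Semantic relevance: 1 if a training point shares any class with the target label, else 0
    relevance = (target_labels.float() @ train_labels.float().t() > 0).float()  # (B, N)

    # Normalized inner-product similarity in [-1, 1] between adversarial and training codes
    k = perturbed_codes.size(1)
    sim = perturbed_codes @ train_codes.t() / k                                  # (B, N)

    # Pull toward relevant points (drive sim -> +1), push away from irrelevant ones (sim -> -1)
    pos = (relevance * (1.0 - sim)).sum(dim=1) / relevance.sum(dim=1).clamp(min=1)
    neg = ((1.0 - relevance) * (1.0 + sim)).sum(dim=1) / (1.0 - relevance).sum(dim=1).clamp(min=1)
    return (pos + neg).mean()
```

In a training loop for the generator, this term would be minimized (possibly alongside a perceptual or norm constraint on the perturbation) so that the perturbed example's code moves toward the hash codes of all training points semantically related to the target label.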
About the journal:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.