{"title":"对抗扰动攻击基于dnn的跨模态检索哈希框架","authors":"Xingwei Zhang, Xiaolong Zheng, W. Mao","doi":"10.1109/ISI53945.2021.9624750","DOIUrl":null,"url":null,"abstract":"The rapid development of Internet and online data explosions elicit strong aspirations for users to search for semantic relevant information based on available samples. While the data online are always released with different modalities like images, videos or texts, effective retrieval models should discover latent semantic information with different structures. Recently, the state-of-the-art deep cross modal retrieval frameworks have effectively enhanced the performance on commonly-used platforms using the deep neural networks (DNNs). Yet DNNs have been verified to be easily misguided by small perturbations, and there are already several attack generation methods proposed on DNN-based models for real-world tasks, but they are all focused on supervised tasks like classification or object recognition. To effectively evaluate the robustness of deep cross-modal retrieval frameworks, in this paper, we propose a retrieval-based adversarial perturbation generation method, and demonstrate that our perturbation could effectively attack the state-of-the-art deep cross-modal and single image retrieval hashing models.","PeriodicalId":347770,"journal":{"name":"2021 IEEE International Conference on Intelligence and Security Informatics (ISI)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Attacking DNN-based Cross-modal Retrieval Hashing Framework with Adversarial Perturbations\",\"authors\":\"Xingwei Zhang, Xiaolong Zheng, W. Mao\",\"doi\":\"10.1109/ISI53945.2021.9624750\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The rapid development of Internet and online data explosions elicit strong aspirations for users to search for semantic relevant information based on available samples. While the data online are always released with different modalities like images, videos or texts, effective retrieval models should discover latent semantic information with different structures. Recently, the state-of-the-art deep cross modal retrieval frameworks have effectively enhanced the performance on commonly-used platforms using the deep neural networks (DNNs). Yet DNNs have been verified to be easily misguided by small perturbations, and there are already several attack generation methods proposed on DNN-based models for real-world tasks, but they are all focused on supervised tasks like classification or object recognition. 
To effectively evaluate the robustness of deep cross-modal retrieval frameworks, in this paper, we propose a retrieval-based adversarial perturbation generation method, and demonstrate that our perturbation could effectively attack the state-of-the-art deep cross-modal and single image retrieval hashing models.\",\"PeriodicalId\":347770,\"journal\":{\"name\":\"2021 IEEE International Conference on Intelligence and Security Informatics (ISI)\",\"volume\":\"66 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Intelligence and Security Informatics (ISI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISI53945.2021.9624750\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Intelligence and Security Informatics (ISI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISI53945.2021.9624750","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Attacking DNN-based Cross-modal Retrieval Hashing Framework with Adversarial Perturbations
Abstract: The rapid development of the Internet and the explosion of online data have created a strong demand from users to search for semantically relevant information based on available samples. Because online data are released in different modalities, such as images, videos, and text, effective retrieval models must discover latent semantic information across these heterogeneous structures. Recently, state-of-the-art deep cross-modal retrieval frameworks built on deep neural networks (DNNs) have substantially improved performance on commonly used benchmarks. However, DNNs are known to be easily misled by small perturbations, and although several attack generation methods have been proposed against DNN-based models for real-world tasks, they all focus on supervised tasks such as classification or object recognition. To effectively evaluate the robustness of deep cross-modal retrieval frameworks, we propose a retrieval-based adversarial perturbation generation method and demonstrate that our perturbations can effectively attack state-of-the-art deep cross-modal and single-image retrieval hashing models.
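The abstract gives no implementation details, but the general idea of a retrieval-oriented perturbation attack on a hashing model can be illustrated with a minimal sketch: a single FGSM-style step that pushes an image's continuous hash output away from its clean binary code, so that Hamming-distance retrieval returns unrelated items. Everything below (the `ToyImageHasher` network, the `hash_attack_fgsm` function, the epsilon budget) is an illustrative assumption, not the authors' method or architecture.

```python
# A minimal sketch (NOT the paper's method): an FGSM-style perturbation that
# flips bits of a DNN-based image hash, degrading hash-based retrieval.
import torch
import torch.nn as nn


class ToyImageHasher(nn.Module):
    """Placeholder CNN mapping images to a K-bit hash via a tanh relaxation."""

    def __init__(self, bits: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, bits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # tanh keeps outputs in (-1, 1); sign(.) yields the binary code at query time.
        return torch.tanh(self.head(self.features(x)))


def hash_attack_fgsm(model: nn.Module, x: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """One FGSM step that pushes the relaxed hash away from the clean binary code."""
    model.eval()
    with torch.no_grad():
        target_code = torch.sign(model(x))  # clean binary code to move away from

    x_adv = x.clone().detach().requires_grad_(True)
    # Minimizing the inner product <h(x_adv), sign(h(x))> flips as many bits as possible.
    loss = (model(x_adv) * target_code).sum()
    loss.backward()

    with torch.no_grad():
        x_adv = x_adv - eps * x_adv.grad.sign()  # descend: decrease the inner product
        x_adv = x_adv.clamp(0, 1)                # stay in the valid pixel range
    return x_adv.detach()


if __name__ == "__main__":
    model = ToyImageHasher(bits=32)
    x = torch.rand(1, 3, 64, 64)  # a random "image" in [0, 1]
    x_adv = hash_attack_fgsm(model, x)
    with torch.no_grad():
        flipped = (torch.sign(model(x)) != torch.sign(model(x_adv))).sum().item()
    print(f"Hamming distance between clean and adversarial codes: {flipped} bits")
```

A real cross-modal attack in this spirit would additionally need to account for the text (or other modality) encoder, since both modalities are embedded into the same Hamming space; the single-image sketch above only shows the gradient-based bit-flipping mechanism.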