Attacking DNN-based Cross-modal Retrieval Hashing Framework with Adversarial Perturbations

Xingwei Zhang, Xiaolong Zheng, W. Mao
DOI: 10.1109/ISI53945.2021.9624750
2021 IEEE International Conference on Intelligence and Security Informatics (ISI), published 2021-11-02
Citations: 0

Abstract

The rapid development of the Internet and the explosion of online data have created a strong desire among users to search for semantically relevant information based on available samples. Because online data are released in different modalities, such as images, videos, or texts, effective retrieval models must discover latent semantic information across different structures. Recently, state-of-the-art deep cross-modal retrieval frameworks have effectively enhanced performance on commonly used platforms using deep neural networks (DNNs). Yet DNNs have been shown to be easily misled by small perturbations, and several attack generation methods have already been proposed for DNN-based models on real-world tasks, but they all focus on supervised tasks such as classification or object recognition. To effectively evaluate the robustness of deep cross-modal retrieval frameworks, in this paper we propose a retrieval-based adversarial perturbation generation method and demonstrate that our perturbations can effectively attack state-of-the-art deep cross-modal and single-image retrieval hashing models.
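The abstract does not detail the proposed perturbation generation method, but the general idea of attacking a hashing-based retrieval model can be illustrated with a generic gradient-sign (FGSM-style) attack. The sketch below is purely illustrative and is not the authors' algorithm: the linear "hash layer", the surrogate loss, and all parameters (`eps`, `steps`, `lr`) are assumptions. A bounded perturbation is optimized to push an input's continuous hash activations away from its clean binary code, so that the quantized code flips bits and the item is retrieved incorrectly.

```python
import numpy as np

# Toy stand-in for a deep hashing model: one linear layer followed by sign()
# to produce a binary hash code. Real cross-modal systems use DNN encoders;
# this sketch only illustrates the attack mechanics.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))  # hash layer: 64-dim feature -> 16-bit code

def hash_code(x):
    """Quantize the hash-layer activations to a code in {-1, +1}^16."""
    return np.sign(W @ x)

def attack(x, eps=0.5, steps=20, lr=0.1):
    """Untargeted FGSM-style attack: reduce the agreement between the
    continuous activations W(x + delta) and the clean code, under an
    L-infinity bound eps on the perturbation."""
    clean = hash_code(x)
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Surrogate loss: clean . (W (x + delta)); its gradient w.r.t.
        # delta is W^T clean, so descending flips activation signs.
        grad = W.T @ clean
        delta -= lr * np.sign(grad)
        delta = np.clip(delta, -eps, eps)  # keep perturbation small
    return x + delta

x = rng.normal(size=64)
adv = attack(x)
flipped = int(np.sum(hash_code(x) != hash_code(adv)))
print("bits flipped:", flipped, "/ 16,",
      "max |perturbation| =", round(float(np.abs(adv - x).max()), 2))
```

Because retrieval with hash codes ranks items by Hamming distance, flipping even a few bits of a query's code can remove the true neighbors from the top of the ranking, which is the failure mode such attacks aim to expose.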