Fast black box attack of face recognition system based on metaheuristic simulated annealing

Yao Wang, Chuan Liu, Junyi Hu, Aijun Li, Yi Zhang, Hongying Lu
{"title":"Fast black box attack of face recognition system based on metaheuristic simulated annealing","authors":"Yao Wang, Chuan Liu, Junyi Hu, Aijun Li, Yi Zhang, Hongying Lu","doi":"10.1117/12.2680397","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs), which have high accuracy prediction and stable network performance, have been widely deployed in various fields. However, the adversarial example, a sample of input data which has been modified very slightly in a way, may easily cause a DNN to maximize loss. Instead of white box attack being able to obtain gradient information, most DNN based systems in actual use can only be attacked by multiple queries. In this paper, we regard face recognition (FR) system as target, and propose a new method named SA-Attack to generate adversarial samples which cannot be distinguished by human within very limited queries. Experiments show that SA-Attack can successfully attack advanced face recognition models, including public and commercial solutions, which proves the practicability of our method.","PeriodicalId":201466,"journal":{"name":"Symposium on Advances in Electrical, Electronics and Computer Engineering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Symposium on Advances in Electrical, Electronics and Computer Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2680397","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep neural networks (DNNs), which offer highly accurate prediction and stable performance, have been widely deployed across many fields. However, an adversarial example, an input that has been modified only slightly, can easily drive a DNN's loss toward its maximum. Unlike white-box attacks, which can obtain gradient information, most DNN-based systems in actual use can only be attacked through repeated queries. In this paper, we take a face recognition (FR) system as the target and propose a new method named SA-Attack, which generates adversarial samples that humans cannot distinguish from the originals within a very limited number of queries. Experiments show that SA-Attack successfully attacks advanced face recognition models, including both public and commercial solutions, which demonstrates the practicality of our method.
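The abstract does not spell out the algorithm, but the core idea of a query-only, simulated-annealing search over image perturbations can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual SA-Attack: `query_model` is a hypothetical black-box oracle returning a matching score for a probe image, and the perturbation budget, step size, and cooling schedule are placeholder choices.

```python
import numpy as np

def sa_attack(x_orig, query_model, eps=8 / 255, steps=500, t0=1.0, cooling=0.99):
    """Illustrative simulated-annealing search for a small perturbation
    that lowers a face-recognition matching score, using only queries.
    `query_model(x)` is an assumed oracle; the paper's SA-Attack may differ."""
    rng = np.random.default_rng(0)
    x_best = x_orig.copy()
    best_score = query_model(x_best)            # one query per evaluation
    x_cur, cur_score = x_best.copy(), best_score
    temp = t0
    for _ in range(steps):
        # Propose a neighbor: add small noise, project back into the eps-ball.
        delta = rng.uniform(-eps / 10, eps / 10, size=x_orig.shape)
        x_new = np.clip(x_cur + delta, x_orig - eps, x_orig + eps)
        x_new = np.clip(x_new, 0.0, 1.0)        # keep pixel values valid
        new_score = query_model(x_new)
        # Metropolis criterion: always accept improvements; accept worse
        # moves with probability exp(-(increase) / temperature).
        if new_score < cur_score or rng.random() < np.exp((cur_score - new_score) / temp):
            x_cur, cur_score = x_new, new_score
            if cur_score < best_score:
                x_best, best_score = x_cur.copy(), cur_score
        temp *= cooling                          # geometric cooling schedule
    return x_best, best_score
```

The annealing temperature is what distinguishes this from greedy random search: early in the run the attack occasionally accepts score-increasing moves, which helps it escape local minima, and as the temperature cools it converges toward pure hill-climbing, all without any gradient access.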