Fast black box attack of face recognition system based on metaheuristic simulated annealing
Yao Wang, Chuan Liu, Junyi Hu, Aijun Li, Yi Zhang, Hongying Lu
Symposium on Advances in Electrical, Electronics and Computer Engineering, published 2023-05-31. DOI: 10.1117/12.2680397
Citations: 0
Abstract
Deep neural networks (DNNs), which deliver highly accurate predictions and stable performance, have been widely deployed across many fields. However, an adversarial example, an input that has been modified only very slightly, can easily drive a DNN's loss to a maximum. Unlike white-box attacks, which can access gradient information, most DNN-based systems in real-world use can only be attacked through repeated queries. In this paper, we take a face recognition (FR) system as the target and propose a new method, SA-Attack, which generates adversarial samples that humans cannot distinguish from the originals within a very limited query budget. Experiments show that SA-Attack successfully attacks advanced face recognition models, including public and commercial solutions, demonstrating the practicality of our method.
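The abstract describes a query-limited black-box attack driven by simulated annealing. The paper's actual SA-Attack algorithm is not reproduced here; the following is only a minimal generic sketch of how simulated annealing can search for an adversarial perturbation using nothing but loss queries to a target model. The function name `sa_attack` and the `score` callback (standing in for a query to the FR system) are illustrative assumptions, not the authors' interface.

```python
import math
import random

def sa_attack(score, x, step=0.05, n_queries=500, t0=1.0, alpha=0.98, seed=0):
    """Generic simulated-annealing black-box attack sketch (illustrative only).

    score(x) -- queries the target model; lower means closer to fooling it.
    x        -- flat list of input features (e.g. normalized pixel values).
    Returns (best_input, best_loss) found within the query budget.
    """
    rng = random.Random(seed)
    cur = list(x)
    cur_loss = score(cur)                      # one query to the black box
    best, best_loss = list(cur), cur_loss
    t = t0
    for _ in range(n_queries):
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)    # small local perturbation
        cand_loss = score(cand)                # one query per candidate
        delta = cand_loss - cur_loss
        # Accept improvements always; accept worse moves with
        # probability exp(-delta / t), so early high temperature allows
        # escaping local optima, and cooling makes the search greedy.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            cur, cur_loss = cand, cand_loss
            if cur_loss < best_loss:
                best, best_loss = list(cur), cur_loss
        t *= alpha                             # geometric cooling schedule
    return best, best_loss
```

Keeping each move a small single-coordinate change is one way to respect the "modified very slightly" constraint from the abstract; the acceptance rule is the standard Metropolis criterion used by simulated annealing.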