{"title":"颤抖触发器:探索基于dnn的人脸识别后门的敏感性","authors":"Cecilia Pasquini, Rainer Böhme","doi":"10.1186/s13635-020-00104-z","DOIUrl":null,"url":null,"abstract":"Backdoor attacks against supervised machine learning methods seek to modify the training samples in such a way that, at inference time, the presence of a specific pattern (trigger) in the input data causes misclassifications to a target class chosen by the adversary. Successful backdoor attacks have been presented in particular for face recognition systems based on deep neural networks (DNNs). These attacks were evaluated for identical triggers at training and inference time. However, the vulnerability to backdoor attacks in practice crucially depends on the sensitivity of the backdoored classifier to approximate trigger inputs. To assess this, we study the response of a backdoored DNN for face recognition to trigger signals that have been transformed with typical image processing operators of varying strength. Results for different kinds of geometric and color transformations suggest that in particular geometric misplacements and partial occlusions of the trigger limit the effectiveness of the backdoor attacks considered. Moreover, our analysis reveals that the spatial interaction of the trigger with the subject’s face affects the success of the attack. 
Experiments with physical triggers inserted in live acquisitions validate the observed response of the DNN when triggers are inserted digitally.","PeriodicalId":46070,"journal":{"name":"EURASIP Journal on Information Security","volume":"142 1","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2020-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition\",\"authors\":\"Cecilia Pasquini, Rainer Böhme\",\"doi\":\"10.1186/s13635-020-00104-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Backdoor attacks against supervised machine learning methods seek to modify the training samples in such a way that, at inference time, the presence of a specific pattern (trigger) in the input data causes misclassifications to a target class chosen by the adversary. Successful backdoor attacks have been presented in particular for face recognition systems based on deep neural networks (DNNs). These attacks were evaluated for identical triggers at training and inference time. However, the vulnerability to backdoor attacks in practice crucially depends on the sensitivity of the backdoored classifier to approximate trigger inputs. To assess this, we study the response of a backdoored DNN for face recognition to trigger signals that have been transformed with typical image processing operators of varying strength. Results for different kinds of geometric and color transformations suggest that in particular geometric misplacements and partial occlusions of the trigger limit the effectiveness of the backdoor attacks considered. Moreover, our analysis reveals that the spatial interaction of the trigger with the subject’s face affects the success of the attack. 
Experiments with physical triggers inserted in live acquisitions validate the observed response of the DNN when triggers are inserted digitally.\",\"PeriodicalId\":46070,\"journal\":{\"name\":\"EURASIP Journal on Information Security\",\"volume\":\"142 1\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2020-06-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EURASIP Journal on Information Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s13635-020-00104-z\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EURASIP Journal on Information Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s13635-020-00104-z","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Backdoor attacks against supervised machine learning methods seek to modify the training samples in such a way that, at inference time, the presence of a specific pattern (trigger) in the input data causes misclassification into a target class chosen by the adversary. Successful backdoor attacks have been demonstrated in particular against face recognition systems based on deep neural networks (DNNs). These attacks were evaluated with identical triggers at training and inference time. In practice, however, the vulnerability to backdoor attacks crucially depends on the sensitivity of the backdoored classifier to approximate trigger inputs. To assess this, we study the response of a backdoored DNN for face recognition to trigger signals that have been transformed with typical image processing operators of varying strength. Results for different kinds of geometric and color transformations suggest that geometric misplacements and partial occlusions of the trigger, in particular, limit the effectiveness of the backdoor attacks considered. Moreover, our analysis reveals that the spatial interaction of the trigger with the subject’s face affects the success of the attack. Experiments with physical triggers inserted in live acquisitions validate the response of the DNN observed when triggers are inserted digitally.
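The sensitivity analysis described in the abstract perturbs the trigger (e.g., shifting or partially occluding it) before inserting it into the input image, then measures how often the backdoored classifier still outputs the target class. A minimal sketch of such an experiment, using NumPy only; the `stamp_trigger` and `attack_success_rate` helpers are hypothetical names for illustration, and the DNN face recognizer used in the actual study is omitted:

```python
import numpy as np

def stamp_trigger(image, trigger, top, left, occlusion=0.0, shift=(0, 0)):
    """Insert a trigger patch into a copy of `image` at (top, left).

    `shift` applies a geometric misplacement (rows, cols) relative to the
    position used at training time; `occlusion` hides that fraction of the
    trigger rows behind the original image content. No bounds checking:
    this is a sketch, not a robust implementation.
    """
    out = image.copy()
    h, w = trigger.shape[:2]
    # Geometric misplacement of the insertion point.
    top, left = top + shift[0], left + shift[1]
    patch = trigger.copy()
    # Partial occlusion: the top rows of the trigger are replaced by the
    # background pixels at the (shifted) insertion location.
    occ_rows = int(round(occlusion * h))
    if occ_rows > 0:
        patch[:occ_rows] = image[top:top + occ_rows, left:left + w]
    out[top:top + h, left:left + w] = patch
    return out

def attack_success_rate(predictions, target_class):
    """Fraction of backdoored inputs classified as the adversary's target."""
    return float(np.mean(np.asarray(predictions) == target_class))
```

In an experiment of this kind, one would stamp the perturbed trigger into a batch of test faces, run the backdoored classifier on them, and plot `attack_success_rate` against the perturbation strength (shift in pixels, occluded fraction, or color-transform intensity).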
About the journal:
The overall goal of the EURASIP Journal on Information Security, sponsored by the European Association for Signal Processing (EURASIP), is to bring together researchers and practitioners in the general field of information security, with a particular emphasis on the use of signal processing tools in adversarial environments. As such, it addresses all work in which security is achieved through a combination of techniques from cryptography, computer security, machine learning, and multimedia signal processing. Application domains include, for example, secure storage, retrieval, and tracking of multimedia data; secure outsourcing of computations; forgery detection of multimedia data; and secure use of biometrics. The journal also welcomes survey papers that give the reader a gentle introduction to one of the topics covered, as well as papers that report large-scale experimental evaluations of existing techniques. Pure cryptographic papers are outside the scope of the journal. Topics relevant to the journal include, but are not limited to:
• Multimedia security primitives (such as digital watermarking, perceptual hashing, multimedia authentication)
• Steganography and steganalysis
• Fingerprinting and traitor tracing
• Joint signal processing and encryption, signal processing in the encrypted domain, applied cryptography
• Biometrics (fusion, multimodal biometrics, protocols, security issues)
• Digital forensics
• Multimedia signal processing approaches tailored towards adversarial environments
• Machine learning in adversarial environments
• Digital rights management
• Network security (such as physical layer security, intrusion detection)
• Hardware security, physical unclonable functions (PUFs)
• Privacy-enhancing technologies for multimedia data
• Private data analysis, security in outsourced computations, cloud privacy