Physical Transferable Attack against Black-box Face Recognition Systems
D. M. Nguyen, Anh Nguyen, H. M. Tran, Trong Nhan Le, T. Quan
2021 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), October 2021. DOI: 10.1109/MAPR53640.2021.9585256
Recent studies have shown that machine learning models in general, and deep neural networks such as CNNs in particular, are vulnerable to adversarial attacks. In face recognition specifically, one can easily deceive a deep network by adding a visually imperceptible adversarial perturbation to the input images. However, most of these works assume an idealized scenario in which the attacker has perfect knowledge of the victim model and the attack is carried out entirely in the digital domain, which is not realistic. As a result, these methods often transfer poorly (or not at all) to the real world. To address this issue, we propose a novel physical transferable attack on deep face recognition systems that works in real-world settings without any knowledge of the victim model. Our experiments on various state-of-the-art models with different architectures and training losses show non-trivial attack success rates. Based on these results, we believe our method can enable further studies on improving the adversarial robustness and security of deep face recognition systems.
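To make the attack setting concrete, the sketch below illustrates the generic transfer-based idea the abstract alludes to: a perturbation is crafted on a white-box surrogate model and then applied to a separate black-box victim, relying on transferability rather than access to the victim. This is not the paper's method; it is a minimal FGSM example in PyTorch, and the surrogate, victim, image, and label objects are assumed placeholders.

# Minimal sketch (not the paper's method) of a transfer-based adversarial attack:
# craft an L-infinity-bounded perturbation on a white-box surrogate model,
# then query a separate black-box victim model with the perturbed image.
import torch
import torch.nn.functional as F

def fgsm_perturbation(surrogate, image, label, epsilon=8 / 255):
    """Craft an FGSM perturbation of the input using gradients of the surrogate."""
    image = image.clone().detach().requires_grad_(True)
    logits = surrogate(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range [0, 1].
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Usage sketch: craft on the surrogate, then evaluate against the black-box victim.
# adv_image = fgsm_perturbation(surrogate, image, label)
# with torch.no_grad():
#     victim_prediction = victim(adv_image).argmax(dim=1)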