{"title":"对抗攻击的鲁棒深度面部属性预测","authors":"Kun Fang, Jie Yang","doi":"10.1145/3467707.3467737","DOIUrl":null,"url":null,"abstract":"Face recognition has always been a hot topic in research, and has also widely been applied in industry areas and daily life. Nowadays, face recognition models with excellent performance are mostly based on deep neural networks (DNN). However, recently researchers find that images added invisible perturbations could successfully fool neural networks, which is known as the so-called adversarial attack. The perturbed images, also known as adversarial examples, are almost the same as the original images, but neural network could give different and wrong predictions with high confidence on these adversarial examples. Such a phenomenon indicates the vulnerable robustness of neural network and thus casts a shadow on the security of DNN-based face recognition models. Therefore, in this paper, we focus on the facial attribute prediction task in face recognition, investigate the influence of adversarial attack on facial attribute prediction and give a solution on improving the robustness of facial attribute prediction models. Extensive experiment results illustrate that the solution could indeed produce much more robust results in facial attribute prediction against adversarial attacks.","PeriodicalId":145582,"journal":{"name":"2021 7th International Conference on Computing and Artificial Intelligence","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Robust Deep Facial Attribute Prediction against Adversarial Attacks\",\"authors\":\"Kun Fang, Jie Yang\",\"doi\":\"10.1145/3467707.3467737\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Face recognition has always been a hot topic in research, and has also widely been applied in industry areas and daily life. Nowadays, face recognition models with excellent performance are mostly based on deep neural networks (DNN). However, recently researchers find that images added invisible perturbations could successfully fool neural networks, which is known as the so-called adversarial attack. The perturbed images, also known as adversarial examples, are almost the same as the original images, but neural network could give different and wrong predictions with high confidence on these adversarial examples. Such a phenomenon indicates the vulnerable robustness of neural network and thus casts a shadow on the security of DNN-based face recognition models. Therefore, in this paper, we focus on the facial attribute prediction task in face recognition, investigate the influence of adversarial attack on facial attribute prediction and give a solution on improving the robustness of facial attribute prediction models. 
Extensive experiment results illustrate that the solution could indeed produce much more robust results in facial attribute prediction against adversarial attacks.\",\"PeriodicalId\":145582,\"journal\":{\"name\":\"2021 7th International Conference on Computing and Artificial Intelligence\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 7th International Conference on Computing and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3467707.3467737\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th International Conference on Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3467707.3467737","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust Deep Facial Attribute Prediction against Adversarial Attacks
Face recognition has always been a hot research topic and has been widely applied in industry and daily life. Nowadays, face recognition models with excellent performance are mostly based on deep neural networks (DNNs). However, researchers have recently found that images with added, imperceptible perturbations can successfully fool neural networks, a phenomenon known as the adversarial attack. The perturbed images, also known as adversarial examples, are almost indistinguishable from the original images, yet a neural network can give different, wrong predictions on them with high confidence. This phenomenon reveals the fragile robustness of neural networks and thus casts a shadow on the security of DNN-based face recognition models. Therefore, in this paper we focus on the facial attribute prediction task in face recognition, investigate the influence of adversarial attacks on facial attribute prediction, and propose a solution for improving the robustness of facial attribute prediction models. Extensive experimental results show that the solution indeed produces much more robust facial attribute predictions against adversarial attacks.
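The abstract does not name a specific attack or defense. As an illustration of the kind of perturbation it describes, the sketch below uses the well-known fast gradient sign method (FGSM) against a hypothetical multi-label facial attribute classifier in PyTorch; the model interface, the BCE loss choice, and the epsilon value are assumptions made for illustration, not details taken from the paper.

# Minimal FGSM sketch, assuming a PyTorch model that outputs one logit per
# binary facial attribute. This is NOT the paper's method, only an example
# of how an imperceptible adversarial perturbation can be generated.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, images: torch.Tensor,
                labels: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    # Enable gradients with respect to the input pixels.
    images = images.clone().detach().requires_grad_(True)
    # Multi-label attribute loss (assumed); labels are 0/1 per attribute.
    loss = nn.BCEWithLogitsLoss()(model(images), labels.float())
    loss.backward()
    # Step in the direction that increases the loss, then clip to the
    # valid pixel range so the change stays visually imperceptible.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()

On such adversarial inputs, an undefended attribute model typically flips many of its predictions even though the perturbed faces look unchanged to a human observer, which is the vulnerability the paper's robustness solution targets.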