Shuangliang Li;Jinwei Wang;Hao Wu;Jiawei Zhang;Xin Cheng;Xiangyang Luo;Bin Ma
Title: Defense Against Adversarial Faces at the Source: Strengthened Faces Based on Hidden Disturbances
Journal: IEEE Transactions on Artificial Intelligence, vol. 6, no. 7, pp. 1761-1775
DOI: 10.1109/TAI.2025.3527923
Published: 2025-01-10
URL: https://ieeexplore.ieee.org/document/10836866/
Citations: 0
Abstract
Face recognition (FR) systems, while widely used across various sectors, are vulnerable to adversarial attacks, particularly those based on deep neural networks. Despite existing efforts to enhance the robustness of FR models, they still face the risk of secondary adversarial attacks. To address this, we propose a novel approach that employs a “strengthened face” carrying preemptive defensive perturbations. The strengthened face preserves the original recognition accuracy while safeguarding FR systems against secondary attacks. In the white-box scenario, the strengthened face is generated with gradient-based and optimization-based methods that minimize the feature-representation difference between face pairs. For the black-box scenario, we propose shielded gradient sign descent (SGSD) to optimize the gradient update direction of strengthened faces, ensuring transferability and effectiveness against unknown adversarial attacks. Experimental results demonstrate the efficacy of strengthened faces in defending against adversarial faces without compromising the performance of FR models or the visual quality of face images. Moreover, SGSD outperforms conventional methods, achieving an average improvement of 4% in transferability across different attack intensities.
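To make the white-box idea concrete, the following is a minimal sketch of a preemptive defensive perturbation generated by signed gradient descent on a feature distance, the general mechanism the abstract describes. Everything here is an assumption for illustration: `features` is a toy linear stand-in for a deep FR embedding network, the numerical gradient replaces backpropagation, and `strengthen`, `eps`, `alpha`, and `steps` are hypothetical names and budgets, not the paper's actual SGSD algorithm or parameters.

```python
import numpy as np

# Toy stand-in for an FR feature extractor: a fixed random linear map
# followed by L2 normalization. A real system would use a deep network
# and compute gradients by backpropagation instead of finite differences.
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64))

def features(x):
    f = W @ x
    return f / np.linalg.norm(f)

def strengthen(x, reference, eps=0.05, alpha=0.01, steps=40):
    """Add a small perturbation to x that pulls its features toward the
    reference face's features, staying inside an L-infinity budget eps."""
    f_ref = features(reference)
    x_adv = x.copy()
    best, best_d = x.copy(), np.linalg.norm(features(x) - f_ref)
    for _ in range(steps):
        # Numerical gradient of the feature distance w.r.t. x_adv.
        base = np.linalg.norm(features(x_adv) - f_ref)
        if base < best_d:
            best, best_d = x_adv.copy(), base
        grad = np.zeros_like(x_adv)
        h = 1e-4
        for i in range(x_adv.size):
            x_p = x_adv.copy()
            x_p[i] += h
            grad[i] = (np.linalg.norm(features(x_p) - f_ref) - base) / h
        # Signed descent step, then project back into the eps-ball around x.
        x_adv = x_adv - alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return best

# Example: pull a probe face's features toward an enrolled gallery face.
probe = rng.standard_normal(64)
gallery = rng.standard_normal(64)
strengthened = strengthen(probe, gallery)
```

The `eps`-ball projection is what keeps the defensive perturbation imperceptible (the "hidden disturbance" role), while the descent step shrinks the feature gap that an attacker's secondary perturbation would need to overcome.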