Title: Transferable Stealthy Adversarial Example Generation via Dual-Latent Adaptive Diffusion for Facial Privacy Protection
Authors: Yuanbo Li; Cong Hu; Xiao-Jun Wu
DOI: 10.1109/TIFS.2025.3607244
Journal: IEEE Transactions on Information Forensics and Security, vol. 20, pp. 9427-9440
Publication date: 2025-09-08 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11153602/
Citation count: 0
Abstract
The widespread application of deep learning-based face recognition (FR) systems poses significant challenges to the privacy of facial images on social media, as unauthorized FR systems can exploit these images to mine user data. Recent studies have utilized adversarial attack techniques to protect facial privacy against malicious FR systems by generating adversarial examples. However, existing noise-based and makeup-based methods produce adversarial examples with noticeable noise or undesired makeup attributes, and suffer from low transferability. In this paper, we propose a novel stealth-based approach, named Dual-latent Adaptive Diffusion Protection (DADP), which uses a diffusion model to generate transferable stealthy adversarial examples consistent with the source images, thereby protecting facial privacy. DADP effectively harnesses adversarial information within both the semantic and diffusion latent spaces to explore adversarial latent representations. Unlike traditional methods that rely on bounded constraints and sign-gradient optimization, DADP employs adaptive optimization to maximize the utilization of adversarial gradient information and introduces latent regularization to constrain the adaptive optimization process, ensuring that the protected faces maintain both high privacy and a natural appearance. Extensive qualitative and quantitative experiments on the public CelebA-HQ and LADN datasets demonstrate that the proposed method crafts more natural-looking stealthy adversarial examples with superior black-box transferability compared to state-of-the-art methods. The code is released at https://github.com/LiYuanBoJNU/DADP
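The contrast the abstract draws between sign-gradient optimization and adaptive optimization with latent regularization can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, hyperparameters, and the use of an Adam-style update with an L2 pull toward the source latent are all illustrative assumptions about what "adaptive optimization" and "latent regularization" typically mean in this setting.

```python
# Hedged sketch, NOT the DADP implementation: contrasts a PGD-style
# sign-gradient step with an Adam-style adaptive step, plus an L2
# regularizer pulling the latent back toward its original value.
# All names and hyperparameters here are illustrative assumptions.

def sign_step(latent, grad, alpha=0.01):
    """PGD-style update: keeps only the gradient sign, discarding magnitude."""
    return [z + alpha * (1 if g > 0 else -1 if g < 0 else 0)
            for z, g in zip(latent, grad)]

def adaptive_step(latent, grad, state, alpha=0.01,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam-style update: running first/second moments retain the full
    gradient magnitude information that sign-based updates throw away."""
    state["t"] += 1
    new = []
    for i, (z, g) in enumerate(zip(latent, grad)):
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g
        m_hat = state["m"][i] / (1 - beta1 ** state["t"])  # bias correction
        v_hat = state["v"][i] / (1 - beta2 ** state["t"])
        new.append(z + alpha * m_hat / (v_hat ** 0.5 + eps))
    return new

def regularized_grad(adv_grad, latent, latent_orig, lam=0.1):
    """Adversarial gradient plus an L2 pull toward the source latent,
    so the protected face stays visually close to the original image."""
    return [g - lam * (z - z0)
            for g, z, z0 in zip(adv_grad, latent, latent_orig)]
```

Because the adaptive step keeps gradient magnitude via its moment estimates, unconstrained it could drift the latent far from the source; the regularization term is what keeps the optimized latent, and hence the protected face, near the original appearance.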
Journal description:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.