p-Blend: Privacy- and Utility-Preserving Blendshape Perturbation Against Re-identification Attacks in Virtual Reality

Jingwei Liu, Lai Wei, Yan Hu, Guangrong Zhao, Qing Yang, Guangdong Bai, Yiran Shen

IEEE Transactions on Visualization and Computer Graphics, published 2025-10-06. DOI: 10.1109/TVCG.2025.3616736

Citations: 0
Abstract
In this paper, we propose p-Blend, an efficient and effective blendshape perturbation mechanism designed to defend against both intra- and cross-app re-identification attacks in virtual reality. p-Blend provides privacy protection when streaming blendshape data to third-party applications on VR devices. Its design considers both privacy and utility: p-Blend not only perturbs blendshape values to resist re-identification attacks but also preserves the smoothness of facial animations and the naturalness of facial expressions, ensuring the continued usability of the data. We validate the effectiveness of p-Blend through extensive empirical evaluations and user studies. Quantitative experiments on a large-scale dataset collected from 45 participants demonstrate that p-Blend significantly reduces re-identification accuracy across a range of machine learning models. While purely random perturbation fails to prevent attacks that exploit statistical features, p-Blend effectively mitigates these risks in both raw and statistical blendshape data. Additionally, user study results show that facial animations generated from p-Blend-perturbed blendshapes maintain greater smoothness and naturalness than those using purely random perturbation. The code and dataset are available at https://github.com/jingwei1016/p-Blend.
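The abstract does not specify p-Blend's actual mechanism, but the contrast it draws (smoothness-preserving perturbation versus purely random per-frame noise) can be illustrated with a minimal, hypothetical sketch: adding temporally smoothed bounded noise to a blendshape time series. The function name, noise model, and parameters below are our assumptions for illustration only, not the authors' method.

```python
import numpy as np

def perturb_blendshapes(frames, noise_scale=0.05, smooth_window=5, seed=0):
    """Hypothetical sketch (not the p-Blend algorithm): perturb a
    (num_frames x num_coefficients) blendshape sequence with temporally
    smoothed Gaussian noise, then clip back to the valid [0, 1] range.

    Smoothing the noise along the time axis avoids the frame-to-frame
    jitter that purely random perturbation introduces, which is the
    utility property the abstract highlights.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_scale, size=frames.shape)
    # Moving-average filter over time: adjacent frames receive highly
    # correlated noise, so the perturbed animation stays smooth.
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, noise)
    return np.clip(frames + smoothed, 0.0, 1.0)
```

Under this sketch, the smoothed perturbation has much smaller frame-to-frame jumps than the same noise applied per frame, while still shifting each coefficient away from its true value. Any re-identification resistance of a real defense would of course depend on the actual perturbation design, which this toy example does not reproduce.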