Md. Shohidul Islam, Behnam Omidi, Ihsen Alouani, Khaled N. Khasawneh
{"title":"VPP:通过欠电压保护隐私的机器学习","authors":"Md. Shohidul Islam, Behnam Omidi, Ihsen Alouani, Khaled N. Khasawneh","doi":"10.1109/HOST55118.2023.10133266","DOIUrl":null,"url":null,"abstract":"Machine Learning (ML) systems are susceptible to membership inference attacks (MIAs), which leak private information from the training data. Specifically, MIAs are able to infer whether a target sample has been used in the training data of a given model. Such privacy breaching concern motivated several defenses against MIAs. However, most of the state-of-theart defenses such as Differential Privacy (DP) come at the cost of lower utility (i.e, classification accuracy). In this work, we propose Privacy Preserving Volt $(V_{PP})$, a new lightweight inference-time approach that leverages undervolting for privacy-preserving ML. Unlike related work, VPP maintains protected models’ utility without requiring re-training. The key insight of our method is to blur the MIA differential analysis outcome by comprehensively garbling the model features using random noise. Unlike DP, which injects noise within the gradient at training time, VPP injects computational randomness in a set of layers’ during inference through carefully designed undervolting Specifically, we propose a bi-objective optimization approach to identify the noise characteristics that yield privacypreserving properties while maintaining the protected model’s utility. Extensive experimental results demonstrate that VPP yields a significantly more interesting utility/privacy tradeoff compared to prior defenses. For example, with comparable privacy protection on CIFAR-10 benchmark, VPP improves the utility by 32.93% over DP-SGD. Besides, while related noisebased defenses are defeated by label-only attacks, VPP shows high resilience to such adaptive MLA. More over, VPP comes with a by-product inference power gain of up to 61%. Finally, for a comprehensive analysis, we propose a new adaptive attacks that operate on the expectation over the stochastic model behavior. We believe that VPP represents a significant step towards practical privacy preserving techniques and considerably improves the state-of-the-art.","PeriodicalId":128125,"journal":{"name":"2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)","volume":"210 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"VPP: Privacy Preserving Machine Learning via Undervolting\",\"authors\":\"Md. Shohidul Islam, Behnam Omidi, Ihsen Alouani, Khaled N. Khasawneh\",\"doi\":\"10.1109/HOST55118.2023.10133266\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine Learning (ML) systems are susceptible to membership inference attacks (MIAs), which leak private information from the training data. Specifically, MIAs are able to infer whether a target sample has been used in the training data of a given model. Such privacy breaching concern motivated several defenses against MIAs. However, most of the state-of-theart defenses such as Differential Privacy (DP) come at the cost of lower utility (i.e, classification accuracy). In this work, we propose Privacy Preserving Volt $(V_{PP})$, a new lightweight inference-time approach that leverages undervolting for privacy-preserving ML. Unlike related work, VPP maintains protected models’ utility without requiring re-training. 
The key insight of our method is to blur the MIA differential analysis outcome by comprehensively garbling the model features using random noise. Unlike DP, which injects noise within the gradient at training time, VPP injects computational randomness in a set of layers’ during inference through carefully designed undervolting Specifically, we propose a bi-objective optimization approach to identify the noise characteristics that yield privacypreserving properties while maintaining the protected model’s utility. Extensive experimental results demonstrate that VPP yields a significantly more interesting utility/privacy tradeoff compared to prior defenses. For example, with comparable privacy protection on CIFAR-10 benchmark, VPP improves the utility by 32.93% over DP-SGD. Besides, while related noisebased defenses are defeated by label-only attacks, VPP shows high resilience to such adaptive MLA. More over, VPP comes with a by-product inference power gain of up to 61%. Finally, for a comprehensive analysis, we propose a new adaptive attacks that operate on the expectation over the stochastic model behavior. We believe that VPP represents a significant step towards practical privacy preserving techniques and considerably improves the state-of-the-art.\",\"PeriodicalId\":128125,\"journal\":{\"name\":\"2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)\",\"volume\":\"210 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HOST55118.2023.10133266\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HOST55118.2023.10133266","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
VPP: Privacy Preserving Machine Learning via Undervolting
Machine Learning (ML) systems are susceptible to membership inference attacks (MIAs), which leak private information from the training data. Specifically, MIAs are able to infer whether a target sample has been used in the training data of a given model. Such privacy-breaching concerns have motivated several defenses against MIAs. However, most state-of-the-art defenses, such as Differential Privacy (DP), come at the cost of lower utility (i.e., classification accuracy). In this work, we propose Privacy Preserving Volt (VPP), a new lightweight inference-time approach that leverages undervolting for privacy-preserving ML. Unlike related work, VPP maintains protected models' utility without requiring re-training. The key insight of our method is to blur the MIA differential analysis outcome by comprehensively garbling the model features using random noise. Unlike DP, which injects noise into the gradients at training time, VPP injects computational randomness into a set of layers during inference through carefully designed undervolting. Specifically, we propose a bi-objective optimization approach to identify the noise characteristics that yield privacy-preserving properties while maintaining the protected model's utility. Extensive experimental results demonstrate that VPP yields a significantly more favorable utility/privacy trade-off than prior defenses. For example, with comparable privacy protection on the CIFAR-10 benchmark, VPP improves utility by 32.93% over DP-SGD. In addition, while related noise-based defenses are defeated by label-only attacks, VPP shows high resilience to such adaptive MIAs. Moreover, VPP comes with a by-product inference power saving of up to 61%. Finally, for a comprehensive analysis, we propose a new adaptive attack that operates on the expectation over the stochastic model behavior. We believe that VPP represents a significant step towards practical privacy-preserving techniques and considerably improves the state-of-the-art.
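
To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of what inference-time noise injection into a chosen set of layers could look like in software. Here the undervolting-induced computational errors are approximated by additive Gaussian noise on layer outputs, and the per-layer `sigma` values stand in for the noise characteristics that the paper tunes via its bi-objective optimization; the class and function names are illustrative only.

```python
# Hypothetical software emulation of VPP-style inference-time noise injection.
# Undervolting errors are approximated by additive Gaussian noise on the
# activations of selected layers; no re-training of the model is needed.
import torch
import torch.nn as nn


class NoisyLayer(nn.Module):
    """Wraps a layer and perturbs its output with random noise at inference time."""

    def __init__(self, layer: nn.Module, sigma: float):
        super().__init__()
        self.layer = layer
        self.sigma = sigma

    def forward(self, x):
        out = self.layer(x)
        # Inject computational randomness only at inference (eval) time;
        # training behavior is left untouched.
        if not self.training:
            out = out + self.sigma * torch.randn_like(out)
        return out


def protect(model: nn.Module, layer_names, sigmas):
    """Replace the named submodules with noise-injecting wrappers."""
    for name, sigma in zip(layer_names, sigmas):
        parent = model
        *path, leaf = name.split(".")
        for p in path:
            parent = getattr(parent, p)
        setattr(parent, leaf, NoisyLayer(getattr(parent, leaf), sigma))
    return model
```

In this sketch, calling `protect(model, ["layer3", "layer4"], [0.05, 0.02])` before `model.eval()` would randomize those layers' activations on every query, which is the software analogue of the stochastic behavior the paper obtains from hardware undervolting.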
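The adaptive attack mentioned above can likewise be sketched: because the protected model is stochastic, an attacker may average the model's outputs over repeated queries and apply a standard confidence-threshold MIA to the resulting expectation. The sketch below is an assumption about how such an attack could be instantiated, not the paper's exact attack; the threshold and query count are illustrative.

```python
# Hypothetical expectation-based adaptive MIA against a stochastic model:
# average softmax outputs over repeated queries, then threshold the expected
# confidence on the true label to guess membership.
import torch


@torch.no_grad()
def expected_confidence(model, x, n_queries: int = 32):
    """Estimate E[softmax(model(x))] over the model's inference-time randomness."""
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_queries)]
    )
    return probs.mean(dim=0)


def membership_guess(model, x, y, threshold: float = 0.9, n_queries: int = 32):
    """Predict 'member' when the expected confidence on the true label is high."""
    avg_probs = expected_confidence(model, x, n_queries)
    conf = avg_probs[torch.arange(len(y)), y]
    return conf > threshold
```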