Title: Privacy-Preserving in Defending against Membership Inference Attacks
Authors: Zuobin Ying, Yun Zhang, Ximeng Liu
Venue: Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice
DOI: 10.1145/3411501.3419428 (https://doi.org/10.1145/3411501.3419428)
Published: 2020-11-09
Citations: 11
Abstract
A membership inference attack aims to determine whether a given data sample was part of the target classifier's training dataset. An adversary's ability to ascertain the presence of an individual constitutes an obvious privacy threat when it relates to a group of users who share a sensitive characteristic. Many defense methods have been proposed against membership inference attacks, but they have not achieved the expected privacy protection. In this paper, we quantify the impact of these defense choices on privacy in experiments using logistic regression and neural network models. Using both formal and empirical analyses, we show that differential privacy and L2 regularization can effectively prevent membership inference attacks.
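To make the setting concrete, here is a minimal illustrative sketch (not the paper's code; the data, model, and all names are assumptions) of the idea the abstract describes: an overfit model assigns higher confidence to training members than to non-members, which a confidence-threshold attacker can exploit, while an L2 penalty shrinks the weights and narrows that gap.

```python
# Illustrative sketch only: L2-regularized logistic regression and the
# member/non-member confidence gap a threshold-based membership inference
# attack would exploit. Synthetic data; all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, l2=0.0, lr=0.1, steps=2000):
    """Fit logistic regression by gradient descent with L2 penalty `l2`."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))          # predicted P(y=1)
        grad = X.T @ (p - y) / len(y) + l2 * w      # log-loss grad + L2 term
        w -= lr * grad
    return w

def confidences(X, y, w):
    """Confidence the model assigns to each sample's true label."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.where(y == 1, p, 1.0 - p)

# Members = training set; non-members = fresh draws from the same distribution.
X_tr = rng.normal(size=(40, 5)); y_tr = (X_tr[:, 0] > 0).astype(float)
X_te = rng.normal(size=(40, 5)); y_te = (X_te[:, 0] > 0).astype(float)

for l2 in (0.0, 1.0):
    w = train_logreg(X_tr, y_tr, l2=l2)
    gap = confidences(X_tr, y_tr, w).mean() - confidences(X_te, y_te, w).mean()
    print(f"l2={l2}: member/non-member confidence gap = {gap:.3f}")
```

A threshold attacker would guess "member" whenever the true-label confidence exceeds some cutoff; the smaller the gap printed above, the less signal that attack has. Differential privacy (e.g. DP-SGD-style gradient clipping and noise) targets the same gap by bounding any single sample's influence on the trained weights.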