{"title":"基于微调的成员隐私保护对抗网络","authors":"Xiangyi Lu, Qing Ren, Feng Tian","doi":"10.1109/NaNA53684.2021.00082","DOIUrl":null,"url":null,"abstract":"With the development of machine learning, the issue of privacy leakage has attracted much attention. Member inference attack is an attack method that threatens the privacy of training datasets. It uses the model’s behavior to infer whether the input user record belongs to the training datasets, and then get the user’s private information according to the purpose of the model. This paper studies the member inference attack under the black box model. We design a defense mechanism to make the learning model and the inference attack model learn from each other, and use the gains from the attack model to fine-tune the last layer’s parameters of the learning model. The fine-tuned learning model can reduce the gains from the membership inference attack with less loss of prediction accuracy. We use different datasets to evaluate the defense mechanism on deep neural networks. The results show that when the training accuracy and test accuracy of the learning model convergence are similar, the learning model only losses about 1% of the prediction accuracy, which the accuracy of the member inference attack drops by a maximum of around 20%.","PeriodicalId":414672,"journal":{"name":"2021 International Conference on Networking and Network Applications (NaNA)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Fine-tuning-based Adversarial Network for Member Privacy Preserving\",\"authors\":\"Xiangyi Lu, Qing Ren, Feng Tian\",\"doi\":\"10.1109/NaNA53684.2021.00082\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the development of machine learning, the issue of privacy leakage has attracted much attention. Member inference attack is an attack method that threatens the privacy of training datasets. It uses the model’s behavior to infer whether the input user record belongs to the training datasets, and then get the user’s private information according to the purpose of the model. This paper studies the member inference attack under the black box model. We design a defense mechanism to make the learning model and the inference attack model learn from each other, and use the gains from the attack model to fine-tune the last layer’s parameters of the learning model. The fine-tuned learning model can reduce the gains from the membership inference attack with less loss of prediction accuracy. We use different datasets to evaluate the defense mechanism on deep neural networks. 
The results show that when the training accuracy and test accuracy of the learning model convergence are similar, the learning model only losses about 1% of the prediction accuracy, which the accuracy of the member inference attack drops by a maximum of around 20%.\",\"PeriodicalId\":414672,\"journal\":{\"name\":\"2021 International Conference on Networking and Network Applications (NaNA)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Networking and Network Applications (NaNA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NaNA53684.2021.00082\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Networking and Network Applications (NaNA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NaNA53684.2021.00082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Fine-tuning-based Adversarial Network for Member Privacy Preserving
With the development of machine learning, the issue of privacy leakage has attracted much attention. The membership inference attack is an attack that threatens the privacy of training datasets: it uses the model's behavior to infer whether an input user record belongs to the training dataset, and from that inference can recover the user's private information implied by the model's purpose. This paper studies the membership inference attack under the black-box model. We design a defense mechanism in which the learning model and the inference attack model learn from each other, and the gain of the attack model is used to fine-tune the parameters of the last layer of the learning model. The fine-tuned learning model reduces the gain of the membership inference attack at a small cost in prediction accuracy. We evaluate the defense mechanism on deep neural networks with different datasets. The results show that when the training accuracy and test accuracy of the learning model are similar at convergence, the learning model loses only about 1% of its prediction accuracy, while the accuracy of the membership inference attack drops by up to around 20%.
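The abstract describes the defense only at a high level. As a rough illustration, the following is a minimal PyTorch-style sketch of one plausible reading of that mechanism: an attack model is trained to infer membership from the learning model's output distribution, and only the learning model's last layer is fine-tuned against the attacker's gain. All names, layer sizes, and the exact loss (task loss plus a weighted attacker log-confidence term, in the spirit of adversarial regularization) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: `target` is the learning model and `attack` is a
# binary classifier that guesses membership from the target's prediction
# vector. The sizes (100 features, 10 classes) are illustrative only.
target = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))
attack = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# Only the last layer of the learning model is fine-tuned, as in the paper.
opt_target = torch.optim.Adam(target[-1].parameters(), lr=1e-4)
opt_attack = torch.optim.Adam(attack.parameters(), lr=1e-3)
lam = 1.0  # assumed utility/privacy trade-off weight

def train_step(member_x, member_y, nonmember_x):
    # 1) Attack step: train the attacker to separate members from
    #    non-members using the target's output distributions.
    with torch.no_grad():
        p_in = F.softmax(target(member_x), dim=1)
        p_out = F.softmax(target(nonmember_x), dim=1)
    logits = torch.cat([attack(p_in), attack(p_out)]).squeeze(1)
    labels = torch.cat([torch.ones(len(p_in)), torch.zeros(len(p_out))])
    attack_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_attack.zero_grad()
    attack_loss.backward()
    opt_attack.step()

    # 2) Defense step: fine-tune the last layer to keep task accuracy while
    #    lowering the attacker's gain, i.e. its log-confidence that training
    #    records are members.
    out = target(member_x)
    task_loss = F.cross_entropy(out, member_y)
    gain = F.logsigmoid(attack(F.softmax(out, dim=1))).mean()
    loss = task_loss + lam * gain  # minimizing this pushes the gain down
    target.zero_grad()
    loss.backward()
    opt_target.step()
    return task_loss.item(), attack_loss.item()

# Toy usage with random data, purely to show the training-loop shape.
for _ in range(3):
    train_step(torch.randn(32, 100), torch.randint(0, 10, (32,)),
               torch.randn(32, 100))

In this sketch the two optimizers implement the "learn from each other" dynamic: the attacker improves its membership gain, and the defense then spends that gain as a regularizer, trading a small amount of task loss for a less distinguishable output distribution.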