{"title":"隐私保护深度学习的竞争对手攻击模型","authors":"Dongdong Zhao, Songsong Liao, Huanhuan Li, Jianwen Xiang","doi":"10.1109/CCGridW59191.2023.00034","DOIUrl":null,"url":null,"abstract":"Since deep learning models usually handle a large amount of data, the ensuing problems of privacy leakage have attracted more and more attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, these methods may still have the risk of privacy leakage in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model, which may be exploited by the competitors of the owners of private data. Specifically, we assume that the competitor has some data that belongs to the same domain as the private data of the other party, and he applies the same PPDL methods to his data as the other party to obtain the perturbed data, and then he trains a model to inverse the perturbed data. Data perturbation-based PPDL methods are selected in four scenarios and their security against the proposed competitor attack model (CAM) is investigated. The experimental results on three public datasets, i.e. MNIST, CIFAR10 and LFW, demonstrate that the selected methods tend to be vulnerable to CAM. On average, the recognition accuracy for the images reconstructed by CAM is about 10% lower than that for the original images, and PSNR is more than 15. The outline of the image and other information can be seen by the naked eye.","PeriodicalId":341115,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Competitor Attack Model for Privacy-Preserving Deep Learning\",\"authors\":\"Dongdong Zhao, Songsong Liao, Huanhuan Li, Jianwen Xiang\",\"doi\":\"10.1109/CCGridW59191.2023.00034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Since deep learning models usually handle a large amount of data, the ensuing problems of privacy leakage have attracted more and more attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, these methods may still have the risk of privacy leakage in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model, which may be exploited by the competitors of the owners of private data. Specifically, we assume that the competitor has some data that belongs to the same domain as the private data of the other party, and he applies the same PPDL methods to his data as the other party to obtain the perturbed data, and then he trains a model to inverse the perturbed data. Data perturbation-based PPDL methods are selected in four scenarios and their security against the proposed competitor attack model (CAM) is investigated. The experimental results on three public datasets, i.e. MNIST, CIFAR10 and LFW, demonstrate that the selected methods tend to be vulnerable to CAM. On average, the recognition accuracy for the images reconstructed by CAM is about 10% lower than that for the original images, and PSNR is more than 15. 
The outline of the image and other information can be seen by the naked eye.\",\"PeriodicalId\":341115,\"journal\":{\"name\":\"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCGridW59191.2023.00034\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGridW59191.2023.00034","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Since deep learning models usually process large amounts of data, the accompanying risk of privacy leakage has attracted increasing attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, they may still leak private information in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model that may be exploited by competitors of the owners of private data. Specifically, we assume that the competitor possesses data from the same domain as the other party's private data, applies the same PPDL method to this data to obtain perturbed samples, and then trains a model to invert the perturbation and recover the original data. We select data perturbation-based PPDL methods in four scenarios and investigate their security against the proposed competitor attack model (CAM). Experimental results on three public datasets, i.e., MNIST, CIFAR10 and LFW, demonstrate that the selected methods tend to be vulnerable to CAM: on average, the recognition accuracy for images reconstructed by CAM is about 10% lower than that for the original images, the PSNR exceeds 15 dB, and image outlines and other details remain visible to the naked eye.
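To make the attack setting concrete, the following is a minimal sketch of the idea described in the abstract: the competitor perturbs his own same-domain images with the same PPDL method as the data owner, trains an inversion network on the resulting (perturbed, original) pairs, and then applies it to the owner's perturbed data. The perturbation (additive Gaussian noise), the network architecture, and all names and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the competitor attack model (CAM) idea; all components are
# illustrative stand-ins, not the paper's actual perturbation or network.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def ppdl_perturb(images: torch.Tensor, noise_std: float = 0.3) -> torch.Tensor:
    """Stand-in for a data perturbation-based PPDL method (here: additive Gaussian noise)."""
    return (images + noise_std * torch.randn_like(images)).clamp(0.0, 1.0)


class InversionNet(nn.Module):
    """Small convolutional network mapping perturbed images to estimates of the originals."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_inversion_model(own_images: torch.Tensor, epochs: int = 5) -> InversionNet:
    """Train on (perturbed, original) pairs built from the competitor's own same-domain data."""
    perturbed = ppdl_perturb(own_images)
    loader = DataLoader(TensorDataset(perturbed, own_images), batch_size=64, shuffle=True)
    model = InversionNet(channels=own_images.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x_pert, x_orig in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x_pert), x_orig)
            loss.backward()
            optimizer.step()
    return model


if __name__ == "__main__":
    # Toy tensors standing in for the competitor's same-domain images (e.g. MNIST-like 28x28).
    own_images = torch.rand(512, 1, 28, 28)
    model = train_inversion_model(own_images)
    # At attack time, the trained inversion model is applied to the victim's perturbed data.
    victims_perturbed = ppdl_perturb(torch.rand(16, 1, 28, 28))
    reconstructed = model(victims_perturbed)
    print(reconstructed.shape)  # torch.Size([16, 1, 28, 28])
```

In this setting, metrics such as PSNR and downstream recognition accuracy would be computed between the reconstructed images and the originals, mirroring the evaluation reported in the abstract.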