Competitor Attack Model for Privacy-Preserving Deep Learning

Dongdong Zhao, Songsong Liao, Huanhuan Li, Jianwen Xiang
{"title":"隐私保护深度学习的竞争对手攻击模型","authors":"Dongdong Zhao, Songsong Liao, Huanhuan Li, Jianwen Xiang","doi":"10.1109/CCGridW59191.2023.00034","DOIUrl":null,"url":null,"abstract":"Since deep learning models usually handle a large amount of data, the ensuing problems of privacy leakage have attracted more and more attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, these methods may still have the risk of privacy leakage in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model, which may be exploited by the competitors of the owners of private data. Specifically, we assume that the competitor has some data that belongs to the same domain as the private data of the other party, and he applies the same PPDL methods to his data as the other party to obtain the perturbed data, and then he trains a model to inverse the perturbed data. Data perturbation-based PPDL methods are selected in four scenarios and their security against the proposed competitor attack model (CAM) is investigated. The experimental results on three public datasets, i.e. MNIST, CIFAR10 and LFW, demonstrate that the selected methods tend to be vulnerable to CAM. On average, the recognition accuracy for the images reconstructed by CAM is about 10% lower than that for the original images, and PSNR is more than 15. The outline of the image and other information can be seen by the naked eye.","PeriodicalId":341115,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Competitor Attack Model for Privacy-Preserving Deep Learning\",\"authors\":\"Dongdong Zhao, Songsong Liao, Huanhuan Li, Jianwen Xiang\",\"doi\":\"10.1109/CCGridW59191.2023.00034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Since deep learning models usually handle a large amount of data, the ensuing problems of privacy leakage have attracted more and more attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, these methods may still have the risk of privacy leakage in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model, which may be exploited by the competitors of the owners of private data. Specifically, we assume that the competitor has some data that belongs to the same domain as the private data of the other party, and he applies the same PPDL methods to his data as the other party to obtain the perturbed data, and then he trains a model to inverse the perturbed data. Data perturbation-based PPDL methods are selected in four scenarios and their security against the proposed competitor attack model (CAM) is investigated. The experimental results on three public datasets, i.e. MNIST, CIFAR10 and LFW, demonstrate that the selected methods tend to be vulnerable to CAM. On average, the recognition accuracy for the images reconstructed by CAM is about 10% lower than that for the original images, and PSNR is more than 15. 
The outline of the image and other information can be seen by the naked eye.\",\"PeriodicalId\":341115,\"journal\":{\"name\":\"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)\",\"volume\":\"38 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCGridW59191.2023.00034\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCGridW59191.2023.00034","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Since deep learning models usually handle large amounts of data, the ensuing problem of privacy leakage has attracted increasing attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, these methods may still risk privacy leakage in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model that may be exploited by competitors of the owners of private data. Specifically, we assume that the competitor holds data from the same domain as the other party's private data, applies the same PPDL method to it to obtain perturbed data, and then trains a model to invert the perturbation. Data perturbation-based PPDL methods are selected in four scenarios, and their security against the proposed competitor attack model (CAM) is investigated. Experimental results on three public datasets, i.e., MNIST, CIFAR10, and LFW, demonstrate that the selected methods tend to be vulnerable to CAM. On average, the recognition accuracy for images reconstructed by CAM is about 10% lower than that for the original images, and the PSNR exceeds 15 dB; the outlines of the images and other information are visible to the naked eye.
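Below is a minimal sketch of the CAM workflow described above, written in PyTorch. The `perturb` function, the `InversionNet` architecture, and all hyperparameters are illustrative assumptions rather than the paper's actual setup: the competitor perturbs his own same-domain images with the same PPDL method, trains an inversion network on (perturbed, original) pairs, and then applies it to the victim's perturbed data.

```python
# Hedged sketch of the competitor attack model (CAM); the perturbation and
# network below are placeholders, not the specific methods evaluated in the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def perturb(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for a data perturbation-based PPDL method (here: additive noise).
    # The attack assumes the competitor can apply the exact same method.
    return (x + 0.3 * torch.randn_like(x)).clamp(0.0, 1.0)


class InversionNet(nn.Module):
    # Small convolutional network mapping perturbed images back to images.
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_inversion_model(own_images: torch.Tensor, epochs: int = 5) -> InversionNet:
    # Step 1: perturb the competitor's own same-domain data with the same PPDL
    # method, producing (perturbed, original) training pairs.
    loader = DataLoader(TensorDataset(perturb(own_images), own_images),
                        batch_size=64, shuffle=True)
    model = InversionNet(channels=own_images.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # Step 2: train the inversion model to reconstruct originals from perturbed inputs.
    for _ in range(epochs):
        for perturbed, original in loader:
            optimizer.zero_grad()
            loss_fn(model(perturbed), original).backward()
            optimizer.step()
    return model


if __name__ == "__main__":
    # Step 3: apply the trained inversion model to the victim's perturbed data.
    competitor_images = torch.rand(512, 1, 28, 28)          # same-domain data (MNIST-like)
    victim_perturbed = perturb(torch.rand(16, 1, 28, 28))    # intercepted perturbed data
    attacker = train_inversion_model(competitor_images, epochs=2)
    with torch.no_grad():
        reconstructed = attacker(victim_perturbed)
    print(reconstructed.shape)  # torch.Size([16, 1, 28, 28])
```

In such a sketch, reconstruction quality could then be measured with PSNR against the original images, mirroring how the abstract reports CAM's effectiveness; the actual architectures, perturbation methods, and the four scenarios are specified in the paper itself.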