LDL: A Defense for Label-Based Membership Inference Attacks

Arezoo Rajabi, D. Sahabandu, Luyao Niu, B. Ramasubramanian, R. Poovendran
{"title":"低密度脂蛋白:基于标签的成员推理攻击的防御","authors":"Arezoo Rajabi, D. Sahabandu, Luyao Niu, B. Ramasubramanian, R. Poovendran","doi":"10.1145/3579856.3582821","DOIUrl":null,"url":null,"abstract":"The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting– it will perform very well on samples seen during training, and poorly on samples not seen during training. Overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs). MIAs aim to determine whether a sample belongs to the dataset used to train a classifier (members) or not (nonmembers). Recently, a new class of label-based MIAs (LAB MIAs) was proposed, where an adversary was only required to have knowledge of predicted labels of samples. LAB MIAs used the insight that member samples were typically located farther away from a classification decision boundary than nonmembers, and were shown to be highly effective across multiple datasets. Developing a defense against an adversary carrying out a LAB MIA on DNN models that cannot be retrained remains an open problem. We present LDL, a light weight defense against LAB MIAs. LDL works by constructing a high-dimensional sphere around queried samples such that the model decision is unchanged for (noisy) variants of the sample within the sphere. This sphere of label-invariance creates ambiguity and prevents a querying adversary from correctly determining whether a sample is a member or a nonmember. We analytically characterize the success rate of an adversary carrying out a LAB MIA when LDL is deployed, and show that the formulation is consistent with experimental observations. We evaluate LDL on seven datasets– CIFAR-10, CIFAR-100, GTSRB, Face, Purchase, Location, and Texas– with varying sizes of training data. All of these datasets have been used by SOTA LAB MIAs. Our experiments demonstrate that LDL reduces the success rate of an adversary carrying out a LAB MIA in each case. We empirically compare LDL with defenses against LAB MIAs that require retraining of DNN models, and show that LDL performs favorably despite not needing to retrain the DNNs.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LDL: A Defense for Label-Based Membership Inference Attacks\",\"authors\":\"Arezoo Rajabi, D. Sahabandu, Luyao Niu, B. Ramasubramanian, R. Poovendran\",\"doi\":\"10.1145/3579856.3582821\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting– it will perform very well on samples seen during training, and poorly on samples not seen during training. Overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs). MIAs aim to determine whether a sample belongs to the dataset used to train a classifier (members) or not (nonmembers). Recently, a new class of label-based MIAs (LAB MIAs) was proposed, where an adversary was only required to have knowledge of predicted labels of samples. 
LAB MIAs used the insight that member samples were typically located farther away from a classification decision boundary than nonmembers, and were shown to be highly effective across multiple datasets. Developing a defense against an adversary carrying out a LAB MIA on DNN models that cannot be retrained remains an open problem. We present LDL, a light weight defense against LAB MIAs. LDL works by constructing a high-dimensional sphere around queried samples such that the model decision is unchanged for (noisy) variants of the sample within the sphere. This sphere of label-invariance creates ambiguity and prevents a querying adversary from correctly determining whether a sample is a member or a nonmember. We analytically characterize the success rate of an adversary carrying out a LAB MIA when LDL is deployed, and show that the formulation is consistent with experimental observations. We evaluate LDL on seven datasets– CIFAR-10, CIFAR-100, GTSRB, Face, Purchase, Location, and Texas– with varying sizes of training data. All of these datasets have been used by SOTA LAB MIAs. Our experiments demonstrate that LDL reduces the success rate of an adversary carrying out a LAB MIA in each case. We empirically compare LDL with defenses against LAB MIAs that require retraining of DNN models, and show that LDL performs favorably despite not needing to retrain the DNNs.\",\"PeriodicalId\":156082,\"journal\":{\"name\":\"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3579856.3582821\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579856.3582821","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting: it will perform very well on samples seen during training and poorly on samples not seen during training. Overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs). MIAs aim to determine whether a sample belongs to the dataset used to train a classifier (members) or not (nonmembers). Recently, a new class of label-based MIAs (LAB MIAs) was proposed, in which an adversary only requires knowledge of the predicted labels of samples. LAB MIAs exploit the insight that member samples are typically located farther from a classification decision boundary than nonmembers, and were shown to be highly effective across multiple datasets. Developing a defense against an adversary carrying out a LAB MIA on DNN models that cannot be retrained remains an open problem. We present LDL, a lightweight defense against LAB MIAs. LDL works by constructing a high-dimensional sphere around queried samples such that the model decision is unchanged for (noisy) variants of the sample within the sphere. This sphere of label invariance creates ambiguity and prevents a querying adversary from correctly determining whether a sample is a member or a nonmember. We analytically characterize the success rate of an adversary carrying out a LAB MIA when LDL is deployed, and show that the formulation is consistent with experimental observations. We evaluate LDL on seven datasets with varying sizes of training data: CIFAR-10, CIFAR-100, GTSRB, Face, Purchase, Location, and Texas. All of these datasets have been used by state-of-the-art (SOTA) LAB MIAs. Our experiments demonstrate that LDL reduces the success rate of an adversary carrying out a LAB MIA in each case. We empirically compare LDL with defenses against LAB MIAs that require retraining of DNN models, and show that LDL performs favorably despite not needing to retrain the DNNs.
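The abstract does not include an implementation, but the mechanism it describes (keeping the returned label unchanged for noisy variants of a query within a sphere) can be approximated with a randomized-smoothing-style majority vote over noisy copies of the query. The sketch below is illustrative only and is not the authors' code: the function name `smoothed_predict`, the parameters `radius` and `n_samples`, the uniform-noise ball, and the assumption that inputs lie in [0, 1] are all hypothetical choices made for this example.

```python
# Illustrative sketch only: approximating a "sphere of label invariance" around each
# query by majority-voting the model's label over noisy variants of the sample.
# This is NOT the LDL paper's implementation; names and noise model are assumptions.
import numpy as np

def smoothed_predict(model_fn, x, radius=0.1, n_samples=64, rng=None):
    """Return a label that is approximately constant for points near x.

    model_fn : callable mapping a batch of inputs to predicted class labels
    x        : a single input sample (numpy array)
    radius   : scale of the noise defining the sphere around the query (assumed)
    n_samples: number of noisy variants used for the majority vote (assumed)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw noisy variants of the query inside a ball of the given radius.
    noise = rng.uniform(-radius, radius, size=(n_samples,) + x.shape)
    variants = np.clip(x[None, ...] + noise, 0.0, 1.0)  # assumes inputs in [0, 1]
    labels = model_fn(variants)
    # Majority vote: nearby queries tend to receive the same label, which masks the
    # member/nonmember gap in distance to the decision boundary.
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]
```

With a wrapper of this kind in front of the classifier, a label-only attacker that estimates a sample's distance to the decision boundary by perturbing its query observes a locally constant label for members and nonmembers alike, which is the ambiguity the abstract attributes to the sphere of label invariance.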