Membership Inference Attacks and Differential Privacy: A Study Within the Context of Generative Models

Borja Arroyo Galende, Patricia A. Apellániz, Juan Parras, Santiago Zazo, Silvia Uribe
{"title":"Membership Inference Attacks and Differential Privacy: A Study Within the Context of Generative Models","authors":"Borja Arroyo Galende;Patricia A. Apellániz;Juan Parras;Santiago Zazo;Silvia Uribe","doi":"10.1109/OJCS.2025.3572244","DOIUrl":null,"url":null,"abstract":"Membership attacks pose a major issue in terms of secure machine learning, especially in cases in which real data are sensitive. Models tend to be overconfident in predicting labels from the training set. Nevertheless, its application has traditionally been limited to supervised models, while in the case of generative models we have found that there is a lack of theoretical foundations to bring this concept into the scene. Hence, this article provides the theoretical background in the context of membership inference attacks and their relationship to generative models, including the derivation of an evaluation metric. In addition, the link between these types of attack and differential privacy is shown to be a particular case. Lastly, we empirically show through simulations the intuition and application of the concepts derived.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"801-811"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11008817","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11008817/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Membership inference attacks pose a major issue for secure machine learning, especially when the underlying data are sensitive. Models tend to be overconfident when predicting labels for samples drawn from their training set, and this overconfidence is what such attacks exploit. Nevertheless, these attacks have traditionally been studied for supervised models, while for generative models there is a lack of theoretical foundations for bringing the concept into play. Hence, this article provides the theoretical background for membership inference attacks and their relationship to generative models, including the derivation of an evaluation metric. In addition, the link between these attacks and differential privacy is shown to be a particular case. Lastly, we empirically illustrate through simulations the intuition behind and application of the derived concepts.
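To make the overconfidence intuition concrete, below is a minimal, hypothetical sketch of a likelihood-threshold membership inference attack against a generative model; it is not the attack or metric derived in this article. It assumes the adversary can obtain a per-sample log-likelihood (or comparable confidence score) from the model, and the scores used here are synthetic placeholders. The gap between true and false positive rates is the commonly used membership advantage.

```python
import numpy as np

def infer_membership(log_likelihood, threshold):
    # Flag a sample as a suspected training-set member when the model
    # assigns it an unusually high log-likelihood (overconfidence).
    return log_likelihood > threshold

# Synthetic scores for illustration only: training members tend to score
# higher than held-out non-members when the model has partly memorized them.
rng = np.random.default_rng(0)
member_scores = rng.normal(loc=-1.0, scale=1.0, size=1000)      # training samples
non_member_scores = rng.normal(loc=-2.0, scale=1.0, size=1000)  # held-out samples

threshold = -1.5
tpr = infer_membership(member_scores, threshold).mean()      # true positive rate
fpr = infer_membership(non_member_scores, threshold).mean()  # false positive rate
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}, advantage = {tpr - fpr:.2f}")
```

For reference, under (epsilon, delta)-differential privacy the standard hypothesis-testing view bounds any such attack by TPR <= e^epsilon * FPR + delta, which is the kind of relationship between membership inference and differential privacy the abstract alludes to; the precise result derived in the article may differ.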