Membership Inference Attacks and Differential Privacy: A Study Within the Context of Generative Models
Borja Arroyo Galende; Patricia A. Apellániz; Juan Parras; Santiago Zazo; Silvia Uribe
IEEE Open Journal of the Computer Society, vol. 6, pp. 801-811, 2025. DOI: 10.1109/OJCS.2025.3572244
https://ieeexplore.ieee.org/document/11008817/
Abstract
Membership inference attacks pose a major threat to secure machine learning, especially when the underlying real data are sensitive, because models tend to be overconfident when predicting labels for samples from their training set. Nevertheless, their application has traditionally been limited to supervised models; in the case of generative models, we have found a lack of theoretical foundations for bringing this concept into play. Hence, this article provides the theoretical background for membership inference attacks and their relationship to generative models, including the derivation of an evaluation metric. In addition, we show the link between these types of attacks and differential privacy, which turns out to be a particular case. Lastly, we empirically illustrate, through simulations, the intuition behind and application of the derived concepts.
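To convey the intuition behind the threat described above, the following is a minimal, hypothetical sketch of a likelihood-threshold membership inference attack against a toy generative model (a deliberately overfit kernel density estimate). The setup, the median-threshold heuristic, and all variable names are illustrative assumptions for this sketch, not the attack construction or the evaluation metric derived in the article.

```python
# Hypothetical sketch: likelihood-threshold membership inference against a
# toy generative model. NOT the paper's method; an illustration only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Members were used to fit the model; non-members come from the same
# underlying distribution but were never seen during training.
members = rng.normal(size=50)
non_members = rng.normal(size=50)

# Toy generative model: a kernel density estimate with a narrow bandwidth,
# which deliberately memorizes the training points (the overconfidence on
# training data that the abstract refers to).
model = gaussian_kde(members, bw_method=0.05)

# Attack: predict "member" when the model assigns a log-density above a
# threshold calibrated on the pooled scores (an assumed, simple heuristic).
scores = np.concatenate([model.logpdf(members), model.logpdf(non_members)])
labels = np.concatenate([np.ones(len(members)), np.zeros(len(non_members))])
threshold = np.median(scores)
predictions = (scores > threshold).astype(int)

# Attack accuracy: 0.5 is chance level; values well above 0.5 indicate
# membership leakage. A differentially private model bounds how far above
# 0.5 any such attack can get.
accuracy = (predictions == labels).mean()
print(f"membership inference accuracy: {accuracy:.3f}")
```

In this toy setting, shrinking the bandwidth (more memorization) pushes the attack accuracy up, while smoothing the model pushes it back toward the chance level of 0.5, mirroring the trade-off that differential privacy formalizes.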