Mitigating Membership Inference Attacks in Machine Learning as a Service

Myria Bouhaddi, K. Adi
{"title":"减少机器学习即服务中的成员推理攻击","authors":"Myria Bouhaddi, K. Adi","doi":"10.1109/CSR57506.2023.10224960","DOIUrl":null,"url":null,"abstract":"The increasing use of Machine Learning as a Service (MLaaS) has raised privacy and security issues due to membership inference attacks. These attacks can extract sensitive information such as the identification of an individual's participation in a training dataset, by exploiting a binary classifier with limited access. The attacks exploit weaknesses in the decision boundaries of the model, and can lead to the disclosure of private information. However, the current defenses against such attacks, such as those based on differential privacy or regularization, have significant limitations. Therefore, further research is needed to develop effective defenses that maintain the utility of machine learning models while providing formal guarantees, even in the presence of strategic adversaries. In this paper, we focus on mitigating the risks of black-box inference attacks against machine learning models as a service. We propose a defense mechanism that brings the attacker's inference classifier into a zone of uncertainty, rendering it unable to classify a data point as a member or non-member. This mechanism takes into account the attacker's behavior by modeling the interaction between defense and attacker as a game, considering potential gains in confidentiality and costs. Our experiments on two datasets demonstrate the effectiveness of our approach in mitigating membership inference attacks. Furthermore, our defense mechanism outperforms existing defenses by offering superior privacy-utility-performance tradeoffs.","PeriodicalId":354918,"journal":{"name":"2023 IEEE International Conference on Cyber Security and Resilience (CSR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mitigating Membership Inference Attacks in Machine Learning as a Service\",\"authors\":\"Myria Bouhaddi, K. Adi\",\"doi\":\"10.1109/CSR57506.2023.10224960\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The increasing use of Machine Learning as a Service (MLaaS) has raised privacy and security issues due to membership inference attacks. These attacks can extract sensitive information such as the identification of an individual's participation in a training dataset, by exploiting a binary classifier with limited access. The attacks exploit weaknesses in the decision boundaries of the model, and can lead to the disclosure of private information. However, the current defenses against such attacks, such as those based on differential privacy or regularization, have significant limitations. Therefore, further research is needed to develop effective defenses that maintain the utility of machine learning models while providing formal guarantees, even in the presence of strategic adversaries. In this paper, we focus on mitigating the risks of black-box inference attacks against machine learning models as a service. We propose a defense mechanism that brings the attacker's inference classifier into a zone of uncertainty, rendering it unable to classify a data point as a member or non-member. This mechanism takes into account the attacker's behavior by modeling the interaction between defense and attacker as a game, considering potential gains in confidentiality and costs. 
Our experiments on two datasets demonstrate the effectiveness of our approach in mitigating membership inference attacks. Furthermore, our defense mechanism outperforms existing defenses by offering superior privacy-utility-performance tradeoffs.\",\"PeriodicalId\":354918,\"journal\":{\"name\":\"2023 IEEE International Conference on Cyber Security and Resilience (CSR)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Cyber Security and Resilience (CSR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSR57506.2023.10224960\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Cyber Security and Resilience (CSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSR57506.2023.10224960","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The increasing use of Machine Learning as a Service (MLaaS) has raised privacy and security concerns due to membership inference attacks. These attacks can extract sensitive information, such as whether an individual's data was part of a training dataset, by exploiting a binary classifier with only limited access to the model. The attacks exploit weaknesses in the model's decision boundaries and can lead to the disclosure of private information. However, current defenses against such attacks, such as those based on differential privacy or regularization, have significant limitations. Further research is therefore needed to develop effective defenses that maintain the utility of machine learning models while providing formal guarantees, even in the presence of strategic adversaries. In this paper, we focus on mitigating the risks of black-box inference attacks against machine learning models offered as a service. We propose a defense mechanism that pushes the attacker's inference classifier into a zone of uncertainty, rendering it unable to classify a data point as a member or non-member. The mechanism accounts for the attacker's behavior by modeling the interaction between defender and attacker as a game, weighing potential confidentiality gains against costs. Our experiments on two datasets demonstrate the effectiveness of our approach in mitigating membership inference attacks. Furthermore, our defense mechanism outperforms existing defenses by offering superior privacy-utility-performance tradeoffs.
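To make the threat model concrete, the sketch below shows the general shape of a black-box membership inference attack of the kind the abstract describes: the attacker queries the served model, collects the returned confidence vectors, and trains a binary classifier to separate members of the training set from non-members. The dataset, models, and parameters here are illustrative assumptions, not the authors' experimental setup.

    # Illustrative sketch of a black-box membership inference attack;
    # the data and models are hypothetical, not the paper's setup.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # A target model trained by the MLaaS provider on private data.
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    X_member, X_nonmember, y_member, y_nonmember = train_test_split(
        X, y, test_size=0.5, random_state=0)
    target = RandomForestClassifier(random_state=0).fit(X_member, y_member)

    # The attacker only sees the confidence vectors returned by the API.
    conf_member = target.predict_proba(X_member)        # queries on training points
    conf_nonmember = target.predict_proba(X_nonmember)  # queries on unseen points

    # Binary attack classifier: label 1 = "member", 0 = "non-member".
    attack_X = np.vstack([conf_member, conf_nonmember])
    attack_y = np.concatenate([np.ones(len(conf_member)),
                               np.zeros(len(conf_nonmember))])
    attack = LogisticRegression().fit(attack_X, attack_y)

    # Accuracy meaningfully above 0.5 means membership leaks through the API.
    print("attack accuracy:", attack.score(attack_X, attack_y))

Overfit targets return noticeably more confident predictions on training points than on unseen ones, which is exactly the decision-boundary weakness the abstract refers to.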
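The "zone of uncertainty" idea can likewise be illustrated, continuing the previous sketch, with a simple output-perturbation loop: the defender (using a surrogate of the attack classifier) nudges each returned confidence vector until the attack score falls into a band around 0.5, where the attacker can no longer decide member versus non-member. This is a minimal sketch of the concept only, under assumed parameters; the paper's game-theoretic mechanism and its cost model are not reproduced here, and the helper name defend is hypothetical.

    # Hypothetical output-perturbation defense illustrating the
    # "zone of uncertainty" concept; NOT the paper's mechanism.
    def defend(conf, attack_model, band=0.05, max_steps=50, step=0.05):
        """Blend a confidence vector toward the uniform distribution until
        the surrogate attack score drops into [0, 0.5 + band]."""
        uniform = np.full_like(conf, 1.0 / len(conf))
        out = conf.copy()
        for _ in range(max_steps):
            score = attack_model.predict_proba(out.reshape(1, -1))[0, 1]
            if score <= 0.5 + band:
                break  # attacker can no longer confidently say "member"
            # Convex combination keeps a valid probability vector; each
            # step trades a little prediction fidelity for privacy.
            out = (1 - step) * out + step * uniform
        return out

    defended = defend(conf_member[0], attack)
    print("defended confidence vector:", defended)

The per-step loss of prediction fidelity is the utility cost that a game-theoretic formulation such as the paper's would weigh against the confidentiality gain.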