{"title":"减少机器学习即服务中的成员推理攻击","authors":"Myria Bouhaddi, K. Adi","doi":"10.1109/CSR57506.2023.10224960","DOIUrl":null,"url":null,"abstract":"The increasing use of Machine Learning as a Service (MLaaS) has raised privacy and security issues due to membership inference attacks. These attacks can extract sensitive information such as the identification of an individual's participation in a training dataset, by exploiting a binary classifier with limited access. The attacks exploit weaknesses in the decision boundaries of the model, and can lead to the disclosure of private information. However, the current defenses against such attacks, such as those based on differential privacy or regularization, have significant limitations. Therefore, further research is needed to develop effective defenses that maintain the utility of machine learning models while providing formal guarantees, even in the presence of strategic adversaries. In this paper, we focus on mitigating the risks of black-box inference attacks against machine learning models as a service. We propose a defense mechanism that brings the attacker's inference classifier into a zone of uncertainty, rendering it unable to classify a data point as a member or non-member. This mechanism takes into account the attacker's behavior by modeling the interaction between defense and attacker as a game, considering potential gains in confidentiality and costs. Our experiments on two datasets demonstrate the effectiveness of our approach in mitigating membership inference attacks. Furthermore, our defense mechanism outperforms existing defenses by offering superior privacy-utility-performance tradeoffs.","PeriodicalId":354918,"journal":{"name":"2023 IEEE International Conference on Cyber Security and Resilience (CSR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mitigating Membership Inference Attacks in Machine Learning as a Service\",\"authors\":\"Myria Bouhaddi, K. Adi\",\"doi\":\"10.1109/CSR57506.2023.10224960\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The increasing use of Machine Learning as a Service (MLaaS) has raised privacy and security issues due to membership inference attacks. These attacks can extract sensitive information such as the identification of an individual's participation in a training dataset, by exploiting a binary classifier with limited access. The attacks exploit weaknesses in the decision boundaries of the model, and can lead to the disclosure of private information. However, the current defenses against such attacks, such as those based on differential privacy or regularization, have significant limitations. Therefore, further research is needed to develop effective defenses that maintain the utility of machine learning models while providing formal guarantees, even in the presence of strategic adversaries. In this paper, we focus on mitigating the risks of black-box inference attacks against machine learning models as a service. We propose a defense mechanism that brings the attacker's inference classifier into a zone of uncertainty, rendering it unable to classify a data point as a member or non-member. This mechanism takes into account the attacker's behavior by modeling the interaction between defense and attacker as a game, considering potential gains in confidentiality and costs. 
Our experiments on two datasets demonstrate the effectiveness of our approach in mitigating membership inference attacks. Furthermore, our defense mechanism outperforms existing defenses by offering superior privacy-utility-performance tradeoffs.\",\"PeriodicalId\":354918,\"journal\":{\"name\":\"2023 IEEE International Conference on Cyber Security and Resilience (CSR)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Cyber Security and Resilience (CSR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSR57506.2023.10224960\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Cyber Security and Resilience (CSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSR57506.2023.10224960","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mitigating Membership Inference Attacks in Machine Learning as a Service
The increasing use of Machine Learning as a Service (MLaaS) has raised privacy and security concerns due to membership inference attacks. By training a binary classifier that needs only limited access to the target model, these attacks can extract sensitive information, such as whether an individual's record was part of the training dataset. They exploit weaknesses in the model's decision boundaries and can lead to the disclosure of private information. Current defenses, such as those based on differential privacy or regularization, have significant limitations, so further research is needed to develop defenses that preserve the utility of machine learning models while providing formal guarantees, even in the presence of strategic adversaries. In this paper, we focus on mitigating the risks of black-box membership inference attacks against machine learning models offered as a service. We propose a defense mechanism that pushes the attacker's inference classifier into a zone of uncertainty, rendering it unable to classify a data point as a member or non-member. The mechanism accounts for the attacker's behavior by modeling the interaction between defender and attacker as a game, weighing potential confidentiality gains against costs. Our experiments on two datasets demonstrate the effectiveness of our approach in mitigating membership inference attacks. Furthermore, our defense mechanism outperforms existing defenses by offering a superior privacy-utility-performance tradeoff.
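To make the "zone of uncertainty" idea concrete, the sketch below flattens a model's released confidence vector toward the uniform distribution until a stand-in membership classifier scores it near 0.5, while preserving the predicted label. This is a minimal illustrative toy, not the authors' mechanism: the attack_posterior proxy, the interpolation strategy, and every name here are assumptions made purely for illustration.

```python
# Toy sketch of pushing an attacker's membership score into an uncertainty
# band around 0.5 -- an assumption for illustration, not the paper's method.
import numpy as np

def attack_posterior(probs: np.ndarray) -> float:
    """Stand-in for the attacker's membership classifier. Real attacks train
    a binary classifier on shadow-model outputs; this crude proxy just treats
    high prediction confidence as evidence of membership."""
    return float(np.max(probs))

def perturb_to_uncertainty(probs: np.ndarray, band: float = 0.05,
                           steps: int = 50) -> np.ndarray:
    """Interpolate the confidence vector toward uniform until the attack
    score enters the band around 0.5, without ever changing the predicted
    label (the utility constraint)."""
    uniform = np.full_like(probs, 1.0 / probs.size)
    label = int(np.argmax(probs))
    best = probs
    for t in np.linspace(0.0, 1.0, steps):
        candidate = (1.0 - t) * probs + t * uniform
        if int(np.argmax(candidate)) != label:
            break                      # would flip the top-1 prediction: stop
        best = candidate
        if abs(attack_posterior(best) - 0.5) <= band:
            break                      # attacker is now in the uncertain zone
    return best

# An overconfident prediction is flattened before being returned to the client.
released = perturb_to_uncertainty(np.array([0.90, 0.06, 0.04]))
print(released, attack_posterior(released))
```

Interpolating toward uniform is only one plausible perturbation; any post-processing that keeps the top-1 label while denying the attacker a confident member/non-member call would serve the same illustrative purpose.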
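The abstract also frames the defense as a game between defender and attacker. The following toy, again an assumption rather than the paper's actual formulation, casts it as a leader-follower game: the defender picks a noise level for released scores, the attacker best-responds with an optimal threshold, and the defender maximizes confidentiality gain minus a utility cost that grows with the noise. The Gaussian score model and all constants are hypothetical.

```python
# Toy leader-follower game between defender (noise level) and attacker
# (decision threshold) -- illustrative assumptions, not the paper's model.
import math

GAP = 0.2        # hypothetical mean score gap between members and non-members
BASE_STD = 0.1   # hypothetical per-class score standard deviation

def attacker_accuracy(noise: float) -> float:
    """Attacker's best-response accuracy under an equal-variance Gaussian
    score model: with the optimal threshold, accuracy = Phi(gap / (2*sigma)),
    where sigma grows with the defender's added noise."""
    sigma = math.sqrt(BASE_STD**2 + noise**2)
    return 0.5 * (1.0 + math.erf(GAP / (2.0 * sigma * math.sqrt(2.0))))

def defender_payoff(noise: float, cost_weight: float = 0.5) -> float:
    """Confidentiality gain (how far the attacker is pushed toward the 50%
    coin-flip zone) minus a utility cost proportional to the perturbation."""
    return (1.0 - attacker_accuracy(noise)) - cost_weight * noise

# Defender's equilibrium move: grid search over candidate noise levels.
levels = [i / 100.0 for i in range(101)]
best = max(levels, key=defender_payoff)
print(f"noise={best:.2f} -> attacker accuracy {attacker_accuracy(best):.3f}")
```

With these particular constants the defender settles on a moderate interior noise level: enough to drag the attacker toward coin-flip accuracy, but not so much that the utility cost dominates, which mirrors the privacy-utility tradeoff the abstract emphasizes.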