{"title":"针对中毒攻击的隐私保护联合学习调查","authors":"Feng Xia, Wenhao Cheng","doi":"10.1007/s10586-024-04629-7","DOIUrl":null,"url":null,"abstract":"<p>Federated learning (FL) is designed to protect privacy of participants by not allowing direct access to the participants’ local datasets and training processes. This limitation hinders the server’s ability to verify the authenticity of the model updates sent by participants, making FL vulnerable to poisoning attacks. In addition, gradients in FL process can reveal private information about the local dataset of the participants. However, there is a contradiction between improving robustness against poisoning attacks and preserving privacy of participants. Privacy-preserving techniques aim to make their data indistinguishable from each other, which hinders the detection of abnormal data based on similarity. It is challenging to enhance both aspects simultaneously. The growing concern for data security and privacy protection has inspired us to undertake this research and compile this survey. In this survey, we investigate existing privacy-preserving defense strategies against poisoning attacks in FL. First, we introduce two important classifications of poisoning attacks: data poisoning attack and model poisoning attack. Second, we study plaintext-based defense strategies and classify them into two categories: poisoning tolerance and poisoning detection. Third, we investigate how the combination of privacy techniques and traditional detection strategies can be achieved to defend against poisoning attacks while protecting the privacy of the participants. Finally, we also discuss the challenges faced in the area of security and privacy.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"15 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A survey on privacy-preserving federated learning against poisoning attacks\",\"authors\":\"Feng Xia, Wenhao Cheng\",\"doi\":\"10.1007/s10586-024-04629-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Federated learning (FL) is designed to protect privacy of participants by not allowing direct access to the participants’ local datasets and training processes. This limitation hinders the server’s ability to verify the authenticity of the model updates sent by participants, making FL vulnerable to poisoning attacks. In addition, gradients in FL process can reveal private information about the local dataset of the participants. However, there is a contradiction between improving robustness against poisoning attacks and preserving privacy of participants. Privacy-preserving techniques aim to make their data indistinguishable from each other, which hinders the detection of abnormal data based on similarity. It is challenging to enhance both aspects simultaneously. The growing concern for data security and privacy protection has inspired us to undertake this research and compile this survey. In this survey, we investigate existing privacy-preserving defense strategies against poisoning attacks in FL. First, we introduce two important classifications of poisoning attacks: data poisoning attack and model poisoning attack. Second, we study plaintext-based defense strategies and classify them into two categories: poisoning tolerance and poisoning detection. 
Third, we investigate how the combination of privacy techniques and traditional detection strategies can be achieved to defend against poisoning attacks while protecting the privacy of the participants. Finally, we also discuss the challenges faced in the area of security and privacy.</p>\",\"PeriodicalId\":501576,\"journal\":{\"name\":\"Cluster Computing\",\"volume\":\"15 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s10586-024-04629-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10586-024-04629-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Federated learning (FL) is designed to protect the privacy of participants by denying direct access to their local datasets and training processes. This limitation, however, prevents the server from verifying the authenticity of the model updates that participants submit, leaving FL vulnerable to poisoning attacks. In addition, the gradients exchanged during FL training can reveal private information about participants' local datasets. There is thus a contradiction between improving robustness against poisoning attacks and preserving participant privacy: privacy-preserving techniques aim to make participants' updates indistinguishable from one another, which hinders similarity-based detection of abnormal updates, so strengthening both properties simultaneously is challenging. The growing concern for data security and privacy protection inspired us to undertake this research and compile this survey. In this survey, we investigate existing privacy-preserving defense strategies against poisoning attacks in FL. First, we introduce the two major categories of poisoning attacks: data poisoning and model poisoning. Second, we study plaintext-based defense strategies and classify them into two categories: poisoning tolerance and poisoning detection. Third, we investigate how privacy-preserving techniques can be combined with traditional detection strategies to defend against poisoning attacks while protecting participant privacy. Finally, we discuss the open challenges in the area of security and privacy.
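To make the first attack category concrete, below is a minimal sketch of a label-flipping data poisoning attack, one of the simplest forms the survey's taxonomy covers. The function name `flip_labels`, the 10-class setting, and the flip rate are illustrative assumptions, not details from the paper.

```python
# Illustrative label-flipping data poisoning (hypothetical helper, not from the paper).
import numpy as np

def flip_labels(labels: np.ndarray, source: int, target: int,
                flip_rate: float = 1.0, seed: int = 0) -> np.ndarray:
    """Relabel a fraction of `source`-class samples as `target` before local training."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    source_idx = np.flatnonzero(labels == source)      # indices of the victim class
    n_flip = int(len(source_idx) * flip_rate)          # how many samples to corrupt
    chosen = rng.choice(source_idx, size=n_flip, replace=False)
    poisoned[chosen] = target
    return poisoned

labels = np.random.default_rng(1).integers(0, 10, size=1000)
poisoned = flip_labels(labels, source=7, target=1, flip_rate=0.5)
print(int((labels != poisoned).sum()), "labels flipped")
```

A participant training on such relabeled data produces a legitimate-looking update that nonetheless degrades the global model on the targeted class, which is what makes data poisoning hard to catch at the server.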
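On the defense side, the plaintext poisoning-detection strategies the abstract mentions typically compare client updates against some robust reference. The sketch below uses distance to the coordinate-wise median, a simple stand-in for the trimmed-mean/Krum-style filters surveyed in the paper; the function name `filter_updates`, the client counts, and the keep ratio are all assumptions for illustration.

```python
# Illustrative distance-based poisoning detection over plaintext updates.
import numpy as np

def filter_updates(updates: np.ndarray, keep_ratio: float = 0.8) -> np.ndarray:
    """updates: shape (n_clients, dim). Drop the farthest outliers, average the rest."""
    reference = np.median(updates, axis=0)             # robust per-coordinate center
    dists = np.linalg.norm(updates - reference, axis=1)
    n_keep = max(1, int(round(len(updates) * keep_ratio)))
    kept = np.argsort(dists)[:n_keep]                  # clients closest to the median survive
    return updates[kept].mean(axis=0)

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(9, 5))     # nine honest client updates
malicious = np.full((1, 5), 5.0)               # one scaled model-poisoning update
agg = filter_updates(np.vstack([benign, malicious]))
print(agg)                                     # near zero: the attacker was excluded
```

Note that this only works because the server sees every update in the clear; under secure aggregation it observes only the sum, so per-client distances cannot even be computed.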
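The contradiction at the heart of the survey can also be seen in code. Below is a minimal sketch of a DP-style clip-then-noise mechanism applied to updates before submission; the function name `privatize`, the clipping norm, and the noise scale are illustrative assumptions rather than parameters from the paper.

```python
# Illustrative DP-style update perturbation (clip to a norm bound, add Gaussian noise).
import numpy as np

def privatize(update: np.ndarray, clip: float = 1.0,
              noise_std: float = 0.5, seed: int = None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))  # bound the update's influence
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

rng = np.random.default_rng(2)
benign = rng.normal(0.0, 0.1, size=5)
malicious = np.full(5, 5.0)
print(privatize(benign, seed=3))
print(privatize(malicious, seed=4))   # both end up norm-bounded and noised
```

After clipping and noising, a scaled malicious update and an honest one occupy the same norm ball and carry comparable noise, so the similarity and distance signals that the previous filter relied on are largely washed out. Reconciling these two mechanisms is precisely the design space this survey maps.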