{"title":"联邦学习中非目标模型中毒攻击与防御机制的系统文献综述","authors":"Tabassum Anika","doi":"10.54480/slr-m.v3i4.42","DOIUrl":null,"url":null,"abstract":"In the past few years, Federated Learning has offered an optimistic solution to the privacy concerns of users who use different Machine Learning Models. But there are risks of exploiting the models by inside and outside adversaries. To preserve the data privacy and the model integrity, the Federated Learning model needs to be protected against the attackers. For this, the untargeted model poisoning attack where the model quality is compromised, needs to be detected early. This study focuses on finding various attack, detection and defense mechanisms against untargeted model poisoning attacks. Total 245 studies were found after searching Google Scholar, ScienceDirect and Scopus. After passing the selection criteria, only 15 studies were included in this systematic literature review. We have highlighted the attacks and defense mechanisms found in the related studies. Additionally, further study avenues in the area were recommended.","PeriodicalId":355296,"journal":{"name":"Systematic Literature Review and Meta-Analysis Journal","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"systematic literature review on untargeted model poisoning attacks and defense mechanisms in federated learning\",\"authors\":\"Tabassum Anika\",\"doi\":\"10.54480/slr-m.v3i4.42\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the past few years, Federated Learning has offered an optimistic solution to the privacy concerns of users who use different Machine Learning Models. But there are risks of exploiting the models by inside and outside adversaries. To preserve the data privacy and the model integrity, the Federated Learning model needs to be protected against the attackers. 
For this, the untargeted model poisoning attack where the model quality is compromised, needs to be detected early. This study focuses on finding various attack, detection and defense mechanisms against untargeted model poisoning attacks. Total 245 studies were found after searching Google Scholar, ScienceDirect and Scopus. After passing the selection criteria, only 15 studies were included in this systematic literature review. We have highlighted the attacks and defense mechanisms found in the related studies. Additionally, further study avenues in the area were recommended.\",\"PeriodicalId\":355296,\"journal\":{\"name\":\"Systematic Literature Review and Meta-Analysis Journal\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Systematic Literature Review and Meta-Analysis Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54480/slr-m.v3i4.42\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Systematic Literature Review and Meta-Analysis Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54480/slr-m.v3i4.42","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Systematic Literature Review on Untargeted Model Poisoning Attacks and Defense Mechanisms in Federated Learning
In the past few years, Federated Learning has offered a promising solution to the privacy concerns of users of Machine Learning models. However, these models are at risk of exploitation by both insider and outsider adversaries. To preserve data privacy and model integrity, Federated Learning systems must be protected against attackers. In particular, untargeted model poisoning attacks, which degrade overall model quality, need to be detected early. This study surveys attack, detection, and defense mechanisms against untargeted model poisoning attacks. A total of 245 studies were retrieved from Google Scholar, ScienceDirect, and Scopus; after applying the selection criteria, 15 studies were included in this systematic literature review. We highlight the attacks and defense mechanisms reported in these studies and recommend avenues for further research in the area.
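To make the abstract's terms concrete, the following is a minimal illustrative sketch (not taken from the reviewed studies) of one well-known untargeted poisoning strategy, sign flipping, against plain FedAvg aggregation, together with coordinate-wise median as one example of a robust defense. The update vectors, the scale factor, and the client counts are all assumptions chosen for the demonstration.

```python
import numpy as np

def fedavg(updates):
    # Server-side FedAvg: element-wise mean of client model updates.
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    # A simple robust aggregator: per-coordinate median of client updates.
    return np.median(updates, axis=0)

def sign_flip_attack(honest_update, scale=20.0):
    # Untargeted poisoning: send a negated, scaled copy of the honest
    # update to push the global model away from convergence.
    return -scale * honest_update

rng = np.random.default_rng(0)
true_direction = np.ones(4)  # direction the honest clients agree on
honest = [true_direction + 0.01 * rng.standard_normal(4) for _ in range(9)]
malicious = [sign_flip_attack(true_direction)]

clean = fedavg(honest)
poisoned = fedavg(honest + malicious)
robust = coordinate_median(honest + malicious)

print(np.dot(clean, true_direction) > 0)     # honest aggregate aligns: True
print(np.dot(poisoned, true_direction) < 0)  # one attacker flips FedAvg: True
print(np.dot(robust, true_direction) > 0)    # median resists the attacker: True
```

The example shows why the survey treats aggregation rules as a defense surface: a single malicious client can dominate an unweighted mean, while order-statistic aggregators bound each client's influence.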