Systematic Literature Review on Untargeted Model Poisoning Attacks and Defense Mechanisms in Federated Learning

Tabassum Anika
{"title":"联邦学习中非目标模型中毒攻击与防御机制的系统文献综述","authors":"Tabassum Anika","doi":"10.54480/slr-m.v3i4.42","DOIUrl":null,"url":null,"abstract":"In the past few years, Federated Learning has offered an optimistic solution to the privacy concerns of users who use different Machine Learning Models. But there are risks of exploiting the models by inside and outside adversaries. To preserve the data privacy and the model integrity, the Federated Learning model needs to be protected against the attackers. For this, the untargeted model poisoning attack where the model quality is compromised, needs to be detected early. This study focuses on finding various attack, detection and defense mechanisms against untargeted model poisoning attacks. Total 245 studies were found after searching Google Scholar, ScienceDirect and Scopus. After passing the selection criteria, only 15 studies were included in this systematic literature review. We have highlighted the attacks and defense mechanisms found in the related studies. Additionally, further study avenues in the area were recommended.","PeriodicalId":355296,"journal":{"name":"Systematic Literature Review and Meta-Analysis Journal","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"systematic literature review on untargeted model poisoning attacks and defense mechanisms in federated learning\",\"authors\":\"Tabassum Anika\",\"doi\":\"10.54480/slr-m.v3i4.42\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the past few years, Federated Learning has offered an optimistic solution to the privacy concerns of users who use different Machine Learning Models. But there are risks of exploiting the models by inside and outside adversaries. To preserve the data privacy and the model integrity, the Federated Learning model needs to be protected against the attackers. For this, the untargeted model poisoning attack where the model quality is compromised, needs to be detected early. This study focuses on finding various attack, detection and defense mechanisms against untargeted model poisoning attacks. Total 245 studies were found after searching Google Scholar, ScienceDirect and Scopus. After passing the selection criteria, only 15 studies were included in this systematic literature review. We have highlighted the attacks and defense mechanisms found in the related studies. 
Additionally, further study avenues in the area were recommended.\",\"PeriodicalId\":355296,\"journal\":{\"name\":\"Systematic Literature Review and Meta-Analysis Journal\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Systematic Literature Review and Meta-Analysis Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54480/slr-m.v3i4.42\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Systematic Literature Review and Meta-Analysis Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54480/slr-m.v3i4.42","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

In the past few years, Federated Learning has offered a promising solution to the privacy concerns of users of machine learning models. However, these models are at risk of exploitation by both insider and outsider adversaries. To preserve data privacy and model integrity, a Federated Learning model must be protected against attackers. In particular, untargeted model poisoning attacks, which degrade overall model quality, need to be detected early. This study surveys attack, detection, and defense mechanisms related to untargeted model poisoning. A total of 245 studies were retrieved by searching Google Scholar, ScienceDirect, and Scopus; after applying the selection criteria, 15 studies were included in this systematic literature review. We highlight the attacks and defense mechanisms reported in these studies and recommend avenues for further research in the area.
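To make the central concept concrete, here is a minimal illustrative sketch, not taken from the reviewed paper: a sign-flipping attacker and a coordinate-wise-median aggregator, both standard examples from this literature. All function names and parameters below are hypothetical.

# Illustrative sketch only -- not from the reviewed paper. A sign-flipping
# attacker degrades overall model quality (untargeted), while a
# coordinate-wise median aggregator limits its influence.
import numpy as np

def honest_update(global_w, local_grad, lr=0.1):
    # A benign client's update: one step of local gradient descent.
    return global_w - lr * local_grad

def sign_flip_attack(benign_update, global_w, scale=10.0):
    # Untargeted poisoning: send an update pointing opposite to the
    # benign direction, scaled up to drag the global model off course.
    return global_w - scale * (benign_update - global_w)

def fedavg(updates):
    # Plain averaging: a single large outlier can skew the result.
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    # Robust aggregation: the coordinate-wise median bounds the
    # influence of a minority of poisoned updates.
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
w = np.zeros(5)                                       # current global model
benign = [honest_update(w, rng.normal(0.5, 0.1, 5)) for _ in range(9)]
poisoned = np.array(benign + [sign_flip_attack(benign[0], w)])  # 1 of 10
print("FedAvg :", fedavg(poisoned))             # dragged off course
print("Median :", median_aggregate(poisoned))   # near the benign consensus

With nine benign clients and one attacker, plain averaging is pulled past zero in the wrong direction, while the median stays close to the benign consensus, which is the basic intuition behind the robust-aggregation defenses the review covers.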