Enhanced Model Poisoning Attack and Multi-Strategy Defense in Federated Learning

IF 6.3 | CAS Zone 1 (Computer Science) | JCR Q1: COMPUTER SCIENCE, THEORY & METHODS
Li Yang;Yinbin Miao;Ziteng Liu;Zhiquan Liu;Xinghua Li;Da Kuang;Hongwei Li;Robert H. Deng
{"title":"联邦学习中的增强模型投毒攻击与多策略防御","authors":"Li Yang;Yinbin Miao;Ziteng Liu;Zhiquan Liu;Xinghua Li;Da Kuang;Hongwei Li;Robert H. Deng","doi":"10.1109/TIFS.2025.3555193","DOIUrl":null,"url":null,"abstract":"As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields, such as intelligent retail, finance and autonomous driving. However, several schemes that aim to attack robust aggregation rules and reducing the model accuracy have been proposed recently. These schemes do not maintain the sign statistics of gradients unchanged during attacks. Therefore, the sign statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling attack and a sign modification component to obtain malicious gradients with higher cosine similarity and modify the sign statistics of malicious gradients, respectively. In addition, these two components have the least impact on the magnitudes of gradients. Then, we propose MSGuard, a Multi-Strategy Byzantine-robust scheme based on cosine mechanisms, symbol statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign have a closer cosine similarity than honest gradients. Extensive experiments demonstrate that ScaleSign can attack most of the existing Byzantine-robust rules, especially achieving a success rate of up to 98.23% for attacks on SignGuard. MSGuard can defend against most existing attacks including ScaleSign. Specifically, in the face of ScaleSign attack, the accuracy of MSGuard improves by up to 41.78% compared to SignGuard.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3877-3892"},"PeriodicalIF":6.3000,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhanced Model Poisoning Attack and Multi-Strategy Defense in Federated Learning\",\"authors\":\"Li Yang;Yinbin Miao;Ziteng Liu;Zhiquan Liu;Xinghua Li;Da Kuang;Hongwei Li;Robert H. Deng\",\"doi\":\"10.1109/TIFS.2025.3555193\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields, such as intelligent retail, finance and autonomous driving. However, several schemes that aim to attack robust aggregation rules and reducing the model accuracy have been proposed recently. These schemes do not maintain the sign statistics of gradients unchanged during attacks. Therefore, the sign statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling attack and a sign modification component to obtain malicious gradients with higher cosine similarity and modify the sign statistics of malicious gradients, respectively. In addition, these two components have the least impact on the magnitudes of gradients. Then, we propose MSGuard, a Multi-Strategy Byzantine-robust scheme based on cosine mechanisms, symbol statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign have a closer cosine similarity than honest gradients. 
Extensive experiments demonstrate that ScaleSign can attack most of the existing Byzantine-robust rules, especially achieving a success rate of up to 98.23% for attacks on SignGuard. MSGuard can defend against most existing attacks including ScaleSign. Specifically, in the face of ScaleSign attack, the accuracy of MSGuard improves by up to 41.78% compared to SignGuard.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"3877-3892\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2025-03-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10942405/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10942405/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields such as intelligent retail, finance, and autonomous driving. However, several schemes that attack robust aggregation rules and reduce model accuracy have been proposed recently. These schemes do not keep the sign statistics of gradients unchanged during an attack; therefore, the sign-statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine- or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling attack to obtain malicious gradients with higher cosine similarity and a sign-modification component to adjust the sign statistics of those gradients. In addition, both components have minimal impact on gradient magnitudes. We then propose MSGuard, a multi-strategy Byzantine-robust scheme based on cosine mechanisms, sign statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign achieve higher cosine similarity than honest gradients. Extensive experiments demonstrate that ScaleSign can defeat most existing Byzantine-robust rules, achieving a success rate of up to 98.23% against SignGuard, and that MSGuard can defend against most existing attacks, including ScaleSign. Specifically, under the ScaleSign attack, MSGuard improves accuracy by up to 41.78% over SignGuard.
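The abstract gives enough structure to sketch both sides of this arms race in code. The NumPy toy below is a hedged illustration, not the paper's actual algorithms: craft_malicious mimics the two ScaleSign components described above (steer a harmful update until it clears a cosine-similarity check, then flip low-magnitude signs so its sign statistics look honest), and sign_filter is a SignGuard-style acceptance test reduced to its core idea. The blending heuristic, the threshold cos_target=0.9, and the tolerance tol=0.05 are all assumptions made for this sketch.

```python
# Toy illustration only: NOT the algorithms from the paper. The blending
# heuristic, cos_target=0.9, and tol=0.05 are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def sign_stats(g):
    """Fractions of positive, zero, and negative coordinates of a gradient."""
    return np.array([np.mean(g > 0), np.mean(g == 0), np.mean(g < 0)])


def craft_malicious(honest, cos_target=0.9):
    """Craft one malicious update that (1) clears a cosine-similarity check
    and (2) mimics honest sign statistics at minimal cost in magnitude."""
    mean = honest.mean(axis=0)
    # Harmful base direction: roughly opposite the honest mean, with noise
    # added so it is not exactly collinear with it.
    noise = rng.standard_normal(mean.shape)
    harmful = -mean + 0.5 * np.linalg.norm(mean) * noise / np.linalg.norm(noise)
    # Component 1 (direction): blend toward the honest mean just enough to
    # pass the assumed cosine-similarity threshold.
    g, lam = harmful.copy(), 0.0
    while cosine(g, mean) < cos_target and lam < 1.0:
        lam += 0.05
        g = (1.0 - lam) * harmful + lam * mean
    # Component 2 (signs): flip the smallest-magnitude coordinates until the
    # fraction of positive signs matches the median honest update. A sign
    # flip leaves every |g_i| (hence the L2 norm) unchanged, echoing the
    # abstract's "minimal impact on gradient magnitudes".
    target_pos = np.median([np.mean(u > 0) for u in honest])
    n_flip = int(round((target_pos - np.mean(g > 0)) * g.size))
    if n_flip >= 0:          # too few positives: flip small negatives up
        idx = np.where(g < 0)[0]
    else:                    # too many positives: flip small positives down
        idx, n_flip = np.where(g > 0)[0], -n_flip
    idx = idx[np.argsort(np.abs(g[idx]))][:n_flip]
    g[idx] = -g[idx]
    return g


def sign_filter(updates, tol=0.05):
    """SignGuard-style acceptance check reduced to its core idea: keep the
    updates whose sign statistics sit near the median statistics."""
    stats = np.stack([sign_stats(u) for u in updates])
    med = np.median(stats, axis=0)
    return [i for i, s in enumerate(stats) if np.max(np.abs(s - med)) <= tol]


# Demo: 9 honest clients and 1 crafted update on a 10,000-parameter model.
honest = rng.standard_normal((9, 10_000)) + 0.1
mal = craft_malicious(honest)
print("cosine(mal, honest mean):", round(cosine(mal, honest.mean(axis=0)), 3))
print("accepted by sign filter :", sign_filter(np.vstack([honest, mal[None]])))
```

Because a sign flip preserves each coordinate's absolute value, the second step leaves the update's L2 norm untouched and shifts its direction only slightly when the flipped coordinates are the smallest ones, which is one way to read the abstract's claim that both components have minimal impact on gradient magnitudes.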
Source journal
IEEE Transactions on Information Forensics and Security (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Annual articles: 234
Review time: 6.5 months
About the journal: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.