Li Yang;Yinbin Miao;Ziteng Liu;Zhiquan Liu;Xinghua Li;Da Kuang;Hongwei Li;Robert H. Deng
IEEE Transactions on Information Forensics and Security, vol. 20, pp. 3877-3892. DOI: 10.1109/TIFS.2025.3555193. Published 2025-03-26 (Journal Article; JCR Q1, Computer Science, Theory & Methods; impact factor 6.3). Available at https://ieeexplore.ieee.org/document/10942405/
Enhanced Model Poisoning Attack and Multi-Strategy Defense in Federated Learning
As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields such as intelligent retail, finance, and autonomous driving. However, several schemes that attack robust aggregation rules and reduce model accuracy have been proposed recently. These schemes fail to keep the sign statistics of gradients unchanged during attacks, so the sign-statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine- or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling component to obtain malicious gradients with higher cosine similarity and a sign modification component to alter the sign statistics of malicious gradients, while both components have minimal impact on gradient magnitudes. We then propose MSGuard, a Multi-Strategy Byzantine-robust scheme based on cosine mechanisms, sign statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign achieve higher cosine similarity than honest gradients. Extensive experiments demonstrate that ScaleSign can defeat most existing Byzantine-robust rules, achieving a success rate of up to 98.23% against SignGuard, and that MSGuard can defend against most existing attacks, including ScaleSign. Specifically, under the ScaleSign attack, the accuracy of MSGuard improves by up to 41.78% compared to SignGuard.
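The sign-statistics idea the abstract builds on can be illustrated with a minimal sketch. This is not the authors' SignGuard or MSGuard code: the function names, the `tol` threshold, and the median-deviation rule are illustrative assumptions. Each client gradient is summarized by the fractions of positive, negative, and zero entries, and clients whose statistics deviate markedly from the cohort median are discarded; a naive sign-flipping or one-sided attack shifts these fractions and is filtered out.

```python
import numpy as np

def sign_stats(grad):
    # Fractions of positive, negative, and zero entries in a gradient vector.
    pos = np.mean(grad > 0)
    neg = np.mean(grad < 0)
    return np.array([pos, neg, 1.0 - pos - neg])

def filter_by_sign_stats(client_grads, tol=0.1):
    # Keep clients whose sign statistics lie within `tol` (an illustrative
    # threshold) of the coordinate-wise median statistics of the cohort.
    stats = np.stack([sign_stats(g) for g in client_grads])
    med = np.median(stats, axis=0)
    keep = np.all(np.abs(stats - med) <= tol, axis=1)
    return [g for g, k in zip(client_grads, keep) if k]

rng = np.random.default_rng(0)
honest = [rng.normal(0, 1, 1000) for _ in range(8)]
# A crude attacker submitting an all-positive gradient shifts the
# positive-sign fraction from ~0.5 to ~1.0 and is rejected.
malicious = [np.abs(rng.normal(0, 1, 1000))]
kept = filter_by_sign_stats(honest + malicious)
```

ScaleSign's point, per the abstract, is precisely that a carefully crafted attack can preserve (or deliberately reshape) such statistics while still steering the aggregate, which is why MSGuard combines sign statistics with cosine and spectral checks rather than relying on any single criterion.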
Journal overview:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.