{"title":"FedTop: a constraint-loosed federated learning aggregation method against poisoning attack","authors":"Che Wang, Zhenhao Wu, Jianbo Gao, Jiashuo Zhang, Junjie Xia, Feng Gao, Zhi Guan, Zhong Chen","doi":"10.1007/s11704-024-3767-z","DOIUrl":null,"url":null,"abstract":"<p>In this paper, we developed FedTop which significantly facilitates collaboration effectiveness between normal participants without suffering significant negative impacts from malicious participants. FedTop can both be regarded as a normal aggregation method for federated learning with normal data and stand more severe poisoning attacks including targeted and untargeted attacks with more loosen preconditions. In addition, we experimentally demonstrate that this method can significantly improve the learning performance in a malicious environment. However, our work still faces much limitations on data set choosing, base model choosing and the number of malicious models. Thus, our future work will be focused on experimentation with more scenarios, such as increasing the number of participants or designing more complex poisoning attacks on more complex data sets.</p>","PeriodicalId":12640,"journal":{"name":"Frontiers of Computer Science","volume":"38 1","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers of Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11704-024-3767-z","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
In this paper, we develop FedTop, an aggregation method that significantly improves the effectiveness of collaboration among normal participants without suffering significant negative impact from malicious participants. FedTop can serve as an ordinary aggregation method for federated learning on benign data, and it also withstands severe poisoning attacks, both targeted and untargeted, under looser preconditions. In addition, we experimentally demonstrate that the method can significantly improve learning performance in a malicious environment. However, our work still faces limitations in the choice of data sets, the choice of base models, and the number of malicious models. Our future work will therefore focus on experiments with more scenarios, such as increasing the number of participants or designing more complex poisoning attacks on more complex data sets.
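To make the idea of a poisoning-robust aggregation rule concrete, the following is a minimal sketch of a generic "keep only the top-scoring client updates" aggregator. It is not the paper's FedTop algorithm, whose exact scoring and selection rules are defined in the paper; the function name top_k_aggregate, the keep_ratio parameter, and the cosine-similarity-to-median scoring are illustrative assumptions.

```python
# Sketch of a generic robust federated aggregation: score each client update
# by cosine similarity to the coordinate-wise median, keep only the top
# fraction, and average the survivors. Illustrative only, not FedTop itself.
import numpy as np


def top_k_aggregate(client_updates: list, keep_ratio: float = 0.7) -> np.ndarray:
    """Aggregate flattened client model updates, discarding likely outliers.

    client_updates: list of 1-D arrays, one flattened update per participant.
    keep_ratio: fraction of clients retained after outlier scoring (assumed).
    """
    updates = np.stack(client_updates)          # shape: (n_clients, n_params)
    reference = np.median(updates, axis=0)      # robust reference direction

    # Poisoned updates tend to point away from the benign consensus and
    # therefore receive low cosine-similarity scores.
    norms = np.linalg.norm(updates, axis=1) * np.linalg.norm(reference) + 1e-12
    scores = updates @ reference / norms

    n_keep = max(1, int(len(client_updates) * keep_ratio))
    keep_idx = np.argsort(scores)[-n_keep:]     # indices of top-scoring clients
    return updates[keep_idx].mean(axis=0)       # average only retained updates


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=10) for _ in range(8)]      # benign updates
    poisoned = [rng.normal(-5.0, 0.1, size=10) for _ in range(2)]   # malicious updates
    print(np.round(top_k_aggregate(honest + poisoned, keep_ratio=0.8), 2))
```

In this toy run, the two malicious updates are scored low and excluded, so the aggregate stays close to the benign clients' direction; a plain average would be dragged toward the poisoned values.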
Journal introduction:
Frontiers of Computer Science aims to provide a forum for the publication of peer-reviewed papers to promote rapid communication and exchange between computer scientists. The journal publishes research papers and review articles in a wide range of topics, including architecture, software, artificial intelligence, theoretical computer science, networks and communication, information systems, multimedia and graphics, information security, and interdisciplinary areas. The journal especially encourages papers from newly emerging and multidisciplinary areas, papers reflecting international trends in research and development, and papers on special topics reporting progress made by Chinese computer scientists.