{"title":"Federated Learning with Anomaly Client Detection and Decentralized Parameter Aggregation","authors":"Shu Liu, Yanlei Shang","doi":"10.1109/dsn-w54100.2022.00016","DOIUrl":null,"url":null,"abstract":"Federated learning is a framework for machine learning that is dedicated to data privacy protection. In federated learning, system cannot fully control the behavior of clients which can be faulty. These behaviors include sharing arbitrary faulty gradients and delaying the process of sharing due to Byzantine attacks or clients’ own software and hardware failures. In federated learning, the parameter server may also be faulty during gradient collection and aggregation, mainly including gradient-based training data inference and model parameter faulty update. The above problems may lead to reduced accuracy of federated learning model training, leakage of client privacy, etc. Existing research enhances the robustness of federated learning by exploiting the decentralization and immutability of Blockchain. For untrusted clients, most research is based on Byzantine fault tolerance to defend against clients indiscriminately, and may cause model accuracy reduction. In addition, most of the research focus on unencrypted gradients, and there is insufficient research on dealing with client anomalies in the case of gradient encryption. For untrusted parameter servers, existing research has problems in energy overhead and scalability. Aiming at the problems above, this paper studies the robustness of federated learning, and proposes a blockchain-based federated learning parameter update architecture PUS-FL. Through experiments simulating distributed machine learning on neural networks, we demonstrate that the anomaly detection algorithm of PUS-FL outperforms conventional gradient filters including geometric median, Multi-Krum and trimmed mean. In addition, our experiments also verify that the scalability-enhanced parameter aggregation consensus algorithm proposed in this paper(SE-PBFT) improves consensus scalability by reducing communication complexity.","PeriodicalId":349937,"journal":{"name":"2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/dsn-w54100.2022.00016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Federated learning is a machine learning framework dedicated to data privacy protection. In federated learning, the system cannot fully control the behavior of clients, which may be faulty. Faulty behaviors include sharing arbitrary erroneous gradients and delaying the sharing process, whether caused by Byzantine attacks or by the clients' own software and hardware failures. The parameter server may also misbehave during gradient collection and aggregation, mainly through gradient-based inference of training data and faulty model parameter updates. These problems can reduce the accuracy of the trained federated model and leak client privacy. Existing research enhances the robustness of federated learning by exploiting the decentralization and immutability of blockchain. For untrusted clients, most work relies on Byzantine fault tolerance to defend against all clients indiscriminately, which may reduce model accuracy. In addition, most studies focus on unencrypted gradients, and handling client anomalies when gradients are encrypted remains insufficiently studied. For untrusted parameter servers, existing approaches suffer from energy overhead and scalability problems. To address these issues, this paper studies the robustness of federated learning and proposes PUS-FL, a blockchain-based federated learning parameter update architecture. Through experiments simulating distributed machine learning on neural networks, we demonstrate that the anomaly detection algorithm of PUS-FL outperforms conventional gradient filters, including the geometric median, Multi-Krum, and the trimmed mean. Our experiments also verify that the scalability-enhanced parameter aggregation consensus algorithm proposed in this paper (SE-PBFT) improves consensus scalability by reducing communication complexity.
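For readers unfamiliar with the gradient-filter baselines named above, the following minimal Python sketch (not taken from the paper, and assuming plain unencrypted gradient vectors) illustrates coordinate-wise trimmed-mean aggregation, one of the conventional filters that PUS-FL is compared against. Function and variable names are illustrative only.

import numpy as np

def trimmed_mean(gradients, trim_ratio=0.1):
    # Coordinate-wise trimmed mean over client gradient vectors.
    # gradients: list of 1-D numpy arrays, one per client.
    # trim_ratio: fraction of smallest and largest values dropped per coordinate.
    stacked = np.stack(gradients)           # shape: (num_clients, dim)
    num_clients = stacked.shape[0]
    k = int(num_clients * trim_ratio)       # clients trimmed from each end
    sorted_vals = np.sort(stacked, axis=0)  # sort each coordinate independently
    kept = sorted_vals[k:num_clients - k]   # drop the k smallest and k largest per coordinate
    return kept.mean(axis=0)

# Example: 10 clients, one of which submits a faulty (outlier) gradient.
rng = np.random.default_rng(0)
grads = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]
grads.append(np.full(4, 100.0))             # simulated Byzantine client
print(trimmed_mean(grads, trim_ratio=0.1))  # outlier is excluded from the aggregate

Such filters defend against all clients indiscriminately, which is the accuracy cost the abstract refers to; PUS-FL instead targets anomalous clients directly.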